00:00:00.001 Started by upstream project "autotest-per-patch" build number 132049
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.048 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.049 The recommended git tool is: git
00:00:00.049 using credential 00000000-0000-0000-0000-000000000002
00:00:00.051 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.076 Fetching changes from the remote Git repository
00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.120 Using shallow fetch with depth 1
00:00:00.120 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.120 > git --version # timeout=10
00:00:00.184 > git --version # 'git version 2.39.2'
00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.239 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.239 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.518 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.530 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.542 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD)
00:00:03.542 > git config core.sparsecheckout # timeout=10
00:00:03.552 > git read-tree -mu HEAD # timeout=10
00:00:03.568 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5
00:00:03.586 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job"
00:00:03.586 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10
00:00:03.692 [Pipeline] Start of Pipeline
00:00:03.705 [Pipeline] library
00:00:03.752 Loading library shm_lib@master
00:00:03.753 Library shm_lib@master is cached. Copying from home.
00:00:03.770 [Pipeline] node
00:00:03.781 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.783 [Pipeline] {
00:00:03.793 [Pipeline] catchError
00:00:03.795 [Pipeline] {
00:00:03.807 [Pipeline] wrap
00:00:03.816 [Pipeline] {
00:00:03.823 [Pipeline] stage
00:00:03.825 [Pipeline] { (Prologue)
00:00:03.843 [Pipeline] echo
00:00:03.845 Node: VM-host-SM17
00:00:03.851 [Pipeline] cleanWs
00:00:03.861 [WS-CLEANUP] Deleting project workspace...
00:00:03.861 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.867 [WS-CLEANUP] done
00:00:04.064 [Pipeline] setCustomBuildProperty
00:00:04.155 [Pipeline] httpRequest
00:00:04.556 [Pipeline] echo
00:00:04.557 Sorcerer 10.211.164.101 is alive
00:00:04.564 [Pipeline] retry
00:00:04.566 [Pipeline] {
00:00:04.576 [Pipeline] httpRequest
00:00:04.581 HttpMethod: GET
00:00:04.581 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:04.582 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:04.600 Response Code: HTTP/1.1 200 OK
00:00:04.601 Success: Status code 200 is in the accepted range: 200,404
00:00:04.601 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:16.400 [Pipeline] }
00:00:16.418 [Pipeline] // retry
00:00:16.425 [Pipeline] sh
00:00:16.704 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:16.719 [Pipeline] httpRequest
00:00:17.979 [Pipeline] echo
00:00:17.980 Sorcerer 10.211.164.101 is alive
00:00:17.989 [Pipeline] retry
00:00:17.991 [Pipeline] {
00:00:18.002 [Pipeline] httpRequest
00:00:18.007 HttpMethod: GET
00:00:18.007 URL: http://10.211.164.101/packages/spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:00:18.008 Sending request to url: http://10.211.164.101/packages/spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:00:18.013 Response Code: HTTP/1.1 200 OK
00:00:18.014 Success: Status code 200 is in the accepted range: 200,404
00:00:18.014 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:01:35.844 [Pipeline] }
00:01:35.863 [Pipeline] // retry
00:01:35.871 [Pipeline] sh
00:01:36.153 + tar --no-same-owner -xf spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:01:39.463 [Pipeline] sh
00:01:39.783 + git -C spdk log --oneline -n5
00:01:39.783 d0fd7ad59 lib/reduce: Add a chunk data read/write cache
00:01:39.783 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer
00:01:39.783 12fc2abf1 test: Remove autopackage.sh
00:01:39.783 83ba90867 fio/bdev: fix typo in README
00:01:39.783 45379ed84 module/compress: Cleanup vol data, when claim fails
00:01:39.801 [Pipeline] writeFile
00:01:39.815 [Pipeline] sh
00:01:40.098 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:40.111 [Pipeline] sh
00:01:40.392 + cat autorun-spdk.conf
00:01:40.392 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.392 SPDK_RUN_ASAN=1
00:01:40.392 SPDK_RUN_UBSAN=1
00:01:40.392 SPDK_TEST_RAID=1
00:01:40.392 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.399 RUN_NIGHTLY=0
00:01:40.401 [Pipeline] }
00:01:40.414 [Pipeline] // stage
00:01:40.428 [Pipeline] stage
00:01:40.430 [Pipeline] { (Run VM)
00:01:40.443 [Pipeline] sh
00:01:40.725 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:40.725 + echo 'Start stage prepare_nvme.sh'
00:01:40.725 Start stage prepare_nvme.sh
00:01:40.725 + [[ -n 5 ]]
00:01:40.725 + disk_prefix=ex5
00:01:40.725 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:40.725 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:40.725 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:40.725 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.725 ++ SPDK_RUN_ASAN=1
00:01:40.725 ++ SPDK_RUN_UBSAN=1
00:01:40.725 ++ SPDK_TEST_RAID=1
00:01:40.725 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.725 ++ RUN_NIGHTLY=0
00:01:40.725 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:40.725 + nvme_files=()
00:01:40.725 + declare -A nvme_files
00:01:40.725 + backend_dir=/var/lib/libvirt/images/backends
00:01:40.725 + nvme_files['nvme.img']=5G
00:01:40.725 + nvme_files['nvme-cmb.img']=5G
00:01:40.725 + nvme_files['nvme-multi0.img']=4G
00:01:40.725 + nvme_files['nvme-multi1.img']=4G
00:01:40.725 + nvme_files['nvme-multi2.img']=4G
00:01:40.725 + nvme_files['nvme-openstack.img']=8G
00:01:40.725 + nvme_files['nvme-zns.img']=5G
00:01:40.725 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:40.725 + (( SPDK_TEST_FTL == 1 ))
00:01:40.725 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:40.725 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:40.725 + for nvme in "${!nvme_files[@]}"
00:01:40.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:40.725 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:40.725 + for nvme in "${!nvme_files[@]}"
00:01:40.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:40.725 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:40.725 + for nvme in "${!nvme_files[@]}"
00:01:40.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:40.725 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:40.725 + for nvme in "${!nvme_files[@]}"
00:01:40.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:40.725 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:40.725 + for nvme in "${!nvme_files[@]}"
00:01:40.725 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:40.725 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:40.726 + for nvme in "${!nvme_files[@]}"
00:01:40.726 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:40.726 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:40.726 + for nvme in "${!nvme_files[@]}"
00:01:40.726 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:40.985 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:40.985 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:40.985 + echo 'End stage prepare_nvme.sh'
00:01:40.985 End stage prepare_nvme.sh
00:01:40.996 [Pipeline] sh
00:01:41.277 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:41.277 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:01:41.277
00:01:41.277 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:41.277 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:41.277 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:41.277 HELP=0
00:01:41.277 DRY_RUN=0
00:01:41.277 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:01:41.277 NVME_DISKS_TYPE=nvme,nvme,
00:01:41.277 NVME_AUTO_CREATE=0
00:01:41.277 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:01:41.277 NVME_CMB=,,
00:01:41.277 NVME_PMR=,,
00:01:41.277 NVME_ZNS=,,
00:01:41.277 NVME_MS=,,
00:01:41.277 NVME_FDP=,,
00:01:41.277 SPDK_VAGRANT_DISTRO=fedora39
00:01:41.278 SPDK_VAGRANT_VMCPU=10
00:01:41.278 SPDK_VAGRANT_VMRAM=12288
00:01:41.278 SPDK_VAGRANT_PROVIDER=libvirt
00:01:41.278 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:41.278 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:41.278 SPDK_OPENSTACK_NETWORK=0
00:01:41.278 VAGRANT_PACKAGE_BOX=0
00:01:41.278 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:41.278 FORCE_DISTRO=true
00:01:41.278 VAGRANT_BOX_VERSION=
00:01:41.278 EXTRA_VAGRANTFILES=
00:01:41.278 NIC_MODEL=e1000
00:01:41.278
00:01:41.278 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:41.278 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:43.812 Bringing machine 'default' up with 'libvirt' provider...
00:01:44.380 ==> default: Creating image (snapshot of base box volume).
00:01:44.641 ==> default: Creating domain with the following settings...
00:01:44.641 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730776377_9ac46b3e0b7ad81716b2
00:01:44.641 ==> default: -- Domain type: kvm
00:01:44.641 ==> default: -- Cpus: 10
00:01:44.641 ==> default: -- Feature: acpi
00:01:44.641 ==> default: -- Feature: apic
00:01:44.641 ==> default: -- Feature: pae
00:01:44.641 ==> default: -- Memory: 12288M
00:01:44.641 ==> default: -- Memory Backing: hugepages:
00:01:44.641 ==> default: -- Management MAC:
00:01:44.641 ==> default: -- Loader:
00:01:44.641 ==> default: -- Nvram:
00:01:44.641 ==> default: -- Base box: spdk/fedora39
00:01:44.641 ==> default: -- Storage pool: default
00:01:44.641 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730776377_9ac46b3e0b7ad81716b2.img (20G)
00:01:44.641 ==> default: -- Volume Cache: default
00:01:44.641 ==> default: -- Kernel:
00:01:44.641 ==> default: -- Initrd:
00:01:44.641 ==> default: -- Graphics Type: vnc
00:01:44.641 ==> default: -- Graphics Port: -1
00:01:44.641 ==> default: -- Graphics IP: 127.0.0.1
00:01:44.641 ==> default: -- Graphics Password: Not defined
00:01:44.641 ==> default: -- Video Type: cirrus
00:01:44.641 ==> default: -- Video VRAM: 9216
00:01:44.641 ==> default: -- Sound Type:
00:01:44.641 ==> default: -- Keymap: en-us
00:01:44.641 ==> default: -- TPM Path:
00:01:44.641 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:44.641 ==> default: -- Command line args:
00:01:44.641 ==> default: -> value=-device,
00:01:44.641 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:44.641 ==> default: -> value=-drive,
00:01:44.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:01:44.641 ==> default: -> value=-device,
00:01:44.641 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.641 ==> default: -> value=-device,
00:01:44.641 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:44.641 ==> default: -> value=-drive,
00:01:44.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:44.641 ==> default: -> value=-device,
00:01:44.641 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.641 ==> default: -> value=-drive,
00:01:44.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:44.641 ==> default: -> value=-device,
00:01:44.641 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.641 ==> default: -> value=-drive,
00:01:44.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:44.641 ==> default: -> value=-device,
00:01:44.641 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.901 ==> default: Creating shared folders metadata...
00:01:44.901 ==> default: Starting domain.
00:01:46.803 ==> default: Waiting for domain to get an IP address...
00:02:01.730 ==> default: Waiting for SSH to become available...
00:02:03.107 ==> default: Configuring and enabling network interfaces...
00:02:07.364 default: SSH address: 192.168.121.66:22
00:02:07.364 default: SSH username: vagrant
00:02:07.364 default: SSH auth method: private key
00:02:09.303 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:17.418 ==> default: Mounting SSHFS shared folder...
00:02:18.355 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:18.356 ==> default: Checking Mount..
00:02:19.731 ==> default: Folder Successfully Mounted!
00:02:19.731 ==> default: Running provisioner: file...
00:02:20.667 default: ~/.gitconfig => .gitconfig
00:02:20.927
00:02:20.927 SUCCESS!
00:02:20.927
00:02:20.927 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:20.927 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:20.927 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:20.927
00:02:20.936 [Pipeline] }
00:02:20.951 [Pipeline] // stage
00:02:20.960 [Pipeline] dir
00:02:20.960 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:20.962 [Pipeline] {
00:02:20.973 [Pipeline] catchError
00:02:20.975 [Pipeline] {
00:02:20.989 [Pipeline] sh
00:02:21.271 + vagrant ssh-config --host vagrant
00:02:21.271 + sed -ne /^Host/,$p
00:02:21.271 + tee ssh_conf
00:02:25.461 Host vagrant
00:02:25.461 HostName 192.168.121.66
00:02:25.461 User vagrant
00:02:25.461 Port 22
00:02:25.461 UserKnownHostsFile /dev/null
00:02:25.461 StrictHostKeyChecking no
00:02:25.461 PasswordAuthentication no
00:02:25.461 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:25.461 IdentitiesOnly yes
00:02:25.461 LogLevel FATAL
00:02:25.461 ForwardAgent yes
00:02:25.461 ForwardX11 yes
00:02:25.461
00:02:25.475 [Pipeline] withEnv
00:02:25.477 [Pipeline] {
00:02:25.491 [Pipeline] sh
00:02:25.773 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:25.773 source /etc/os-release
00:02:25.773 [[ -e /image.version ]] && img=$(< /image.version)
00:02:25.773 # Minimal, systemd-like check.
00:02:25.773 if [[ -e /.dockerenv ]]; then
00:02:25.773 # Clear garbage from the node's name:
00:02:25.773 # agt-er_autotest_547-896 -> autotest_547-896
00:02:25.773 # $HOSTNAME is the actual container id
00:02:25.773 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:25.773 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:25.773 # We can assume this is a mount from a host where container is running,
00:02:25.773 # so fetch its hostname to easily identify the target swarm worker.
00:02:25.773 container="$(< /etc/hostname) ($agent)"
00:02:25.773 else
00:02:25.773 # Fallback
00:02:25.773 container=$agent
00:02:25.773 fi
00:02:25.773 fi
00:02:25.773 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:25.773
00:02:25.783 [Pipeline] }
00:02:25.799 [Pipeline] // withEnv
00:02:25.809 [Pipeline] setCustomBuildProperty
00:02:25.823 [Pipeline] stage
00:02:25.825 [Pipeline] { (Tests)
00:02:25.843 [Pipeline] sh
00:02:26.123 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:26.393 [Pipeline] sh
00:02:26.675 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:26.970 [Pipeline] timeout
00:02:26.970 Timeout set to expire in 1 hr 30 min
00:02:26.972 [Pipeline] {
00:02:26.986 [Pipeline] sh
00:02:27.267 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:27.834 HEAD is now at d0fd7ad59 lib/reduce: Add a chunk data read/write cache
00:02:27.846 [Pipeline] sh
00:02:28.126 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:28.399 [Pipeline] sh
00:02:28.680 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:28.955 [Pipeline] sh
00:02:29.235 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:29.493 ++ readlink -f spdk_repo
00:02:29.493 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:29.493 + [[ -n /home/vagrant/spdk_repo ]]
00:02:29.493 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:29.493 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:29.493 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:29.493 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:29.493 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:29.493 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:29.493 + cd /home/vagrant/spdk_repo
00:02:29.493 + source /etc/os-release
00:02:29.493 ++ NAME='Fedora Linux'
00:02:29.493 ++ VERSION='39 (Cloud Edition)'
00:02:29.493 ++ ID=fedora
00:02:29.493 ++ VERSION_ID=39
00:02:29.493 ++ VERSION_CODENAME=
00:02:29.493 ++ PLATFORM_ID=platform:f39
00:02:29.493 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:29.493 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:29.493 ++ LOGO=fedora-logo-icon
00:02:29.493 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:29.493 ++ HOME_URL=https://fedoraproject.org/
00:02:29.493 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:29.493 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:29.493 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:29.493 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:29.493 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:29.493 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:29.493 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:29.493 ++ SUPPORT_END=2024-11-12
00:02:29.493 ++ VARIANT='Cloud Edition'
00:02:29.493 ++ VARIANT_ID=cloud
00:02:29.493 + uname -a
00:02:29.493 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:29.493 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:29.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:29.752 Hugepages
00:02:29.752 node hugesize free / total
00:02:29.752 node0 1048576kB 0 / 0
00:02:30.011 node0 2048kB 0 / 0
00:02:30.011
00:02:30.011 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:30.011 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:30.011 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:30.011 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:30.011 + rm -f /tmp/spdk-ld-path
00:02:30.011 + source autorun-spdk.conf
00:02:30.011 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:30.011 ++ SPDK_RUN_ASAN=1
00:02:30.011 ++ SPDK_RUN_UBSAN=1
00:02:30.011 ++ SPDK_TEST_RAID=1
00:02:30.011 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:30.011 ++ RUN_NIGHTLY=0
00:02:30.011 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:30.011 + [[ -n '' ]]
00:02:30.011 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:30.011 + for M in /var/spdk/build-*-manifest.txt
00:02:30.011 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:30.011 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:30.011 + for M in /var/spdk/build-*-manifest.txt
00:02:30.011 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:30.011 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:30.011 + for M in /var/spdk/build-*-manifest.txt
00:02:30.011 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:30.011 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:30.011 ++ uname
00:02:30.011 + [[ Linux == \L\i\n\u\x ]]
00:02:30.011 + sudo dmesg -T
00:02:30.011 + sudo dmesg --clear
00:02:30.011 + dmesg_pid=5206
00:02:30.011 + [[ Fedora Linux == FreeBSD ]]
00:02:30.011 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:30.011 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:30.011 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:30.011 + sudo dmesg -Tw
00:02:30.011 + [[ -x /usr/src/fio-static/fio ]]
00:02:30.011 + export FIO_BIN=/usr/src/fio-static/fio
00:02:30.011 + FIO_BIN=/usr/src/fio-static/fio
00:02:30.011 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:30.011 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:30.011 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:30.011 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:30.011 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:30.011 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:30.011 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:30.011 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:30.011 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:30.270 03:13:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:30.270 03:13:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:30.270 03:13:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:30.270 03:13:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:30.270 03:13:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:30.270 03:13:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:30.270 03:13:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:30.270 03:13:43 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:30.270 03:13:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:30.270 03:13:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:30.270 03:13:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:30.270 03:13:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:30.270 03:13:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:30.270 03:13:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:30.270 03:13:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:30.270 03:13:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:30.270 03:13:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:30.270 03:13:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:30.270 03:13:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:30.270 03:13:43 -- paths/export.sh@5 -- $ export PATH
00:02:30.270 03:13:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:30.270 03:13:43 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:30.270 03:13:43 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:30.270 03:13:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730776423.XXXXXX
00:02:30.270 03:13:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730776423.cG0IVO
00:02:30.270 03:13:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:30.270 03:13:43 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:30.270 03:13:43 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:30.270 03:13:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:30.270 03:13:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:30.270 03:13:43 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:30.270 03:13:43 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:30.270 03:13:43 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.270 03:13:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:30.270 03:13:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:30.270 03:13:43 -- pm/common@17 -- $ local monitor
00:02:30.270 03:13:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:30.270 03:13:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:30.270 03:13:43 -- pm/common@25 -- $ sleep 1
00:02:30.270 03:13:43 -- pm/common@21 -- $ date +%s
00:02:30.270 03:13:43 -- pm/common@21 -- $ date +%s
00:02:30.270 03:13:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730776423
00:02:30.270 03:13:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730776423
00:02:30.270 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730776423_collect-cpu-load.pm.log
00:02:30.270 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730776423_collect-vmstat.pm.log
00:02:31.208 03:13:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:31.208 03:13:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:31.208 03:13:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:31.208 03:13:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:31.208 03:13:44 -- spdk/autobuild.sh@16 -- $ date -u
00:02:31.208 Tue Nov 5 03:13:44 AM UTC 2024
00:02:31.208 03:13:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:31.208 v25.01-pre-125-gd0fd7ad59
00:02:31.208 03:13:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:31.208 03:13:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:31.208 03:13:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:31.208 03:13:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:31.208 03:13:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:31.208 ************************************
00:02:31.208 START TEST asan
00:02:31.208 ************************************
00:02:31.208 using asan
00:02:31.208 03:13:44 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:02:31.208
00:02:31.208 real 0m0.001s
00:02:31.208 user 0m0.000s
00:02:31.208 sys 0m0.001s
00:02:31.208 03:13:44 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:31.208 03:13:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:31.208 ************************************
00:02:31.208 END TEST asan
00:02:31.208 ************************************
00:02:31.208 03:13:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:31.208 03:13:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:31.208 03:13:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:31.208 03:13:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:31.208 03:13:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:31.208 ************************************
00:02:31.208 START TEST ubsan
00:02:31.208 ************************************
00:02:31.208 using ubsan
00:02:31.208 03:13:44 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:31.208
00:02:31.208 real 0m0.000s
00:02:31.208 user 0m0.000s
00:02:31.208 sys 0m0.000s
00:02:31.208 03:13:44 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:31.208 03:13:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:31.208 ************************************
00:02:31.208 END TEST ubsan
00:02:31.208 ************************************
00:02:31.466 03:13:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:31.466 03:13:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:31.466 03:13:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:31.466 03:13:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:31.466 03:13:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:31.466 03:13:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:31.466 03:13:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:31.466 03:13:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:31.466 03:13:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:31.466 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:31.466 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:32.034 Using 'verbs' RDMA provider
00:02:45.188 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:00.067 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:00.067 Creating mk/config.mk...done.
00:03:00.067 Creating mk/cc.flags.mk...done.
00:03:00.067 Type 'make' to build.
00:03:00.067 03:14:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:00.067 03:14:11 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:00.067 03:14:11 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:00.067 03:14:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:00.067 ************************************
00:03:00.067 START TEST make
00:03:00.067 ************************************
00:03:00.067 03:14:11 make -- common/autotest_common.sh@1127 -- $ make -j10
00:03:00.067 make[1]: Nothing to be done for 'all'.
00:03:12.290 The Meson build system
00:03:12.290 Version: 1.5.0
00:03:12.290 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:12.290 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:12.290 Build type: native build
00:03:12.290 Program cat found: YES (/usr/bin/cat)
00:03:12.290 Project name: DPDK
00:03:12.290 Project version: 24.03.0
00:03:12.290 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:12.290 C linker for the host machine: cc ld.bfd 2.40-14
00:03:12.290 Host machine cpu family: x86_64
00:03:12.290 Host machine cpu: x86_64
00:03:12.290 Message: ## Building in Developer Mode ##
00:03:12.290 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:12.290 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:12.290 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:12.290 Program python3 found: YES (/usr/bin/python3)
00:03:12.290 Program cat found: YES (/usr/bin/cat)
00:03:12.290 Compiler for C supports arguments -march=native: YES
00:03:12.290 Checking for size of "void *" : 8
00:03:12.290 Checking for size of "void *" : 8 (cached)
00:03:12.290 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:12.290 Library m found: YES
00:03:12.290 Library numa found: YES
00:03:12.290 Has header "numaif.h" : YES
00:03:12.290 Library fdt found: NO
00:03:12.290 Library execinfo found: NO
00:03:12.290 Has header "execinfo.h" : YES
00:03:12.290 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:12.290 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:12.290 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:12.290 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:12.290 Run-time dependency openssl found: YES 3.1.1
00:03:12.290 Run-time dependency libpcap found: YES 1.10.4
00:03:12.290 Has header "pcap.h" with dependency libpcap: YES
00:03:12.290 Compiler for C supports arguments -Wcast-qual: YES
00:03:12.290 Compiler for C supports arguments -Wdeprecated: YES
00:03:12.290 Compiler for C supports arguments -Wformat: YES
00:03:12.290 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:12.290 Compiler for C supports arguments -Wformat-security: NO
00:03:12.290 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:12.290 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:12.290 Compiler for C supports arguments -Wnested-externs: YES
00:03:12.290 Compiler for C supports arguments -Wold-style-definition: YES
00:03:12.290 Compiler for C supports arguments -Wpointer-arith: YES
00:03:12.290 Compiler for C supports arguments -Wsign-compare: YES
00:03:12.290 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:12.290 Compiler for C supports arguments -Wundef: YES
00:03:12.290 Compiler for C supports arguments -Wwrite-strings: YES
00:03:12.290 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:12.290 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:12.290 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:12.290 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:12.290 Program objdump found: YES (/usr/bin/objdump)
00:03:12.290 Compiler for C supports arguments -mavx512f: YES
00:03:12.290 Checking if "AVX512 checking" compiles: YES
00:03:12.290 Fetching value of define "__SSE4_2__" : 1
00:03:12.290 Fetching value of define "__AES__" : 1
00:03:12.290 Fetching value of define "__AVX__" : 1
00:03:12.290 Fetching value of define "__AVX2__" : 1
00:03:12.290 Fetching value of define "__AVX512BW__" : (undefined)
00:03:12.290 Fetching value of define "__AVX512CD__" : (undefined)
00:03:12.290 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:12.290 Fetching value of define "__AVX512F__" : (undefined)
00:03:12.290 Fetching value of define "__AVX512VL__" : (undefined)
00:03:12.290 Fetching value of define "__PCLMUL__" : 1
00:03:12.290 Fetching value of define "__RDRND__" : 1
00:03:12.290 Fetching value of define "__RDSEED__" : 1
00:03:12.290 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:12.290 Fetching value of define "__znver1__" : (undefined)
00:03:12.290 Fetching value of define "__znver2__" : (undefined)
00:03:12.290 Fetching value of define "__znver3__" : (undefined)
00:03:12.290 Fetching value of define "__znver4__" : (undefined)
00:03:12.290 Library asan found: YES
00:03:12.290 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:12.290 Message: lib/log: Defining dependency "log"
00:03:12.291 Message: lib/kvargs: Defining dependency "kvargs"
00:03:12.291 Message: lib/telemetry: Defining dependency "telemetry"
00:03:12.291 Library rt found: YES
00:03:12.291 Checking for function "getentropy" : NO
00:03:12.291 Message: lib/eal: Defining dependency "eal"
00:03:12.291 Message: lib/ring: Defining dependency "ring"
00:03:12.291 Message: lib/rcu: Defining dependency "rcu"
00:03:12.291 Message: lib/mempool: Defining dependency "mempool"
00:03:12.291 Message: lib/mbuf: Defining dependency "mbuf"
00:03:12.291 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:12.291 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:12.291 Compiler for C supports arguments -mpclmul: YES
00:03:12.291 Compiler for C supports arguments -maes: YES
00:03:12.291 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:12.291 Compiler for C supports arguments -mavx512bw: YES
00:03:12.291 Compiler for C supports arguments -mavx512dq: YES
00:03:12.291 Compiler for C supports arguments -mavx512vl: YES
00:03:12.291 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:12.291 Compiler for C supports arguments -mavx2: YES
00:03:12.291 Compiler for C supports arguments -mavx: YES
00:03:12.291 Message: lib/net: Defining dependency "net"
00:03:12.291 Message: lib/meter: Defining dependency "meter"
00:03:12.291 Message: lib/ethdev: Defining dependency "ethdev"
00:03:12.291 Message: lib/pci: Defining dependency "pci"
00:03:12.291 Message: lib/cmdline: Defining dependency "cmdline"
00:03:12.291 Message: lib/hash: Defining dependency "hash"
00:03:12.291 Message: lib/timer: Defining dependency "timer"
00:03:12.291 Message: lib/compressdev: Defining dependency "compressdev"
00:03:12.291 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:12.291 Message: lib/dmadev: Defining dependency "dmadev"
00:03:12.291 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:12.291 Message: lib/power: Defining dependency "power"
00:03:12.291 Message: lib/reorder: Defining dependency "reorder"
00:03:12.291 Message: lib/security: Defining dependency "security"
00:03:12.291 Has header "linux/userfaultfd.h" : YES
00:03:12.291 Has header "linux/vduse.h" : YES
00:03:12.291 Message: lib/vhost: Defining dependency "vhost"
00:03:12.291 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:12.291 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:12.291 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:12.291 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:12.291 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:12.291 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:12.291 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:12.291 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:12.291 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:12.291 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:12.291 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:12.291 Configuring doxy-api-html.conf using configuration
00:03:12.291 Configuring doxy-api-man.conf using configuration
00:03:12.291 Program mandb found: YES (/usr/bin/mandb)
00:03:12.291 Program sphinx-build found: NO
00:03:12.291 Configuring rte_build_config.h using configuration
00:03:12.291 Message:
00:03:12.291 =================
00:03:12.291 Applications Enabled
00:03:12.291 =================
00:03:12.291
00:03:12.291 apps:
00:03:12.291
00:03:12.291
00:03:12.291 Message:
00:03:12.291 =================
00:03:12.291 Libraries Enabled
00:03:12.291 =================
00:03:12.291
00:03:12.291 libs:
00:03:12.291 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:12.291 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:12.291 cryptodev, dmadev, power, reorder, security, vhost,
00:03:12.291
00:03:12.291 Message:
00:03:12.291 ===============
00:03:12.291 Drivers Enabled
00:03:12.291 ===============
00:03:12.291
00:03:12.291 common:
00:03:12.291
00:03:12.291 bus:
00:03:12.291 pci, vdev,
00:03:12.291 mempool:
00:03:12.291 ring,
00:03:12.291 dma:
00:03:12.291
00:03:12.291 net:
00:03:12.291
00:03:12.291 crypto:
00:03:12.291
00:03:12.291 compress:
00:03:12.291
00:03:12.291 vdpa:
00:03:12.291
00:03:12.291
00:03:12.291 Message:
00:03:12.291 =================
00:03:12.291 Content Skipped
00:03:12.291 =================
00:03:12.291
00:03:12.291 apps:
00:03:12.291 dumpcap: explicitly disabled via build config
00:03:12.291 graph: explicitly disabled via build config
00:03:12.291 pdump: explicitly disabled via build config
00:03:12.291 proc-info: explicitly disabled via build config
00:03:12.291 test-acl: explicitly disabled via build config
00:03:12.291 test-bbdev: explicitly disabled via build config
00:03:12.291 test-cmdline: explicitly disabled via build config
00:03:12.291 test-compress-perf: explicitly disabled via build config
00:03:12.291 test-crypto-perf: explicitly disabled via build config
00:03:12.291 test-dma-perf: explicitly disabled via build config
00:03:12.291 test-eventdev: explicitly disabled via build config
00:03:12.291 test-fib: explicitly disabled via build config
00:03:12.291 test-flow-perf: explicitly disabled via build config
00:03:12.291 test-gpudev: explicitly disabled via build config
00:03:12.291 test-mldev: explicitly disabled via build config
00:03:12.291 test-pipeline: explicitly disabled via build config
00:03:12.291 test-pmd: explicitly disabled via build config
00:03:12.291 test-regex: explicitly disabled via build config
00:03:12.291 test-sad: explicitly disabled via build config
00:03:12.291 test-security-perf: explicitly disabled via build config
00:03:12.291
00:03:12.291 libs:
00:03:12.291 argparse: explicitly disabled via build config
00:03:12.291 metrics: explicitly disabled via build config
00:03:12.291 acl: explicitly disabled via build config
00:03:12.291 bbdev: explicitly disabled via build config
00:03:12.291 bitratestats: explicitly disabled via build config
00:03:12.291 bpf: explicitly disabled via build config
00:03:12.291 cfgfile: explicitly disabled via build config
00:03:12.291 distributor: explicitly disabled via build config
00:03:12.291 efd: explicitly disabled via build config
00:03:12.291 eventdev: explicitly disabled via build config
00:03:12.291 dispatcher: explicitly disabled via build config
00:03:12.291 gpudev: explicitly disabled via build config
00:03:12.291 gro: explicitly disabled via build config
00:03:12.291 gso: explicitly disabled via build config
00:03:12.291 ip_frag: explicitly disabled via build config
00:03:12.291 jobstats: explicitly disabled via build config
00:03:12.291 latencystats: explicitly disabled via build config
00:03:12.291 lpm: explicitly disabled via build config
00:03:12.291 member: explicitly disabled via build config
00:03:12.291 pcapng: explicitly disabled via build config
00:03:12.291 rawdev: explicitly disabled via build config
00:03:12.291 regexdev: explicitly disabled via build config
00:03:12.291 mldev: explicitly disabled via build config
00:03:12.291 rib: explicitly disabled via build config
00:03:12.291 sched: explicitly disabled via build config
00:03:12.291 stack: explicitly disabled via build config
00:03:12.291 ipsec: explicitly disabled via build config
00:03:12.291 pdcp: explicitly disabled via build config
00:03:12.291 fib: explicitly disabled via build config
00:03:12.291 port: explicitly disabled via build config
00:03:12.291 pdump: explicitly disabled via build config
00:03:12.291 table: explicitly disabled via build config
00:03:12.291 pipeline: explicitly disabled via build config
00:03:12.291 graph: explicitly disabled via build config
00:03:12.291 node: explicitly disabled via build config
00:03:12.291
00:03:12.291 drivers:
00:03:12.291 common/cpt: not in enabled drivers build config
00:03:12.291 common/dpaax: not in enabled drivers build config
00:03:12.291 common/iavf: not in enabled drivers build config
00:03:12.291 common/idpf: not in enabled drivers build config
00:03:12.291 common/ionic: not in enabled drivers build config
00:03:12.291 common/mvep: not in enabled drivers build config
00:03:12.291 common/octeontx: not in enabled drivers build config
00:03:12.291 bus/auxiliary: not in enabled drivers build config
00:03:12.291 bus/cdx: not in enabled drivers build config
00:03:12.291 bus/dpaa: not in enabled drivers build config
00:03:12.291 bus/fslmc: not in enabled drivers build config
00:03:12.291 bus/ifpga: not in enabled drivers build config
00:03:12.291 bus/platform: not in enabled drivers build config
00:03:12.291 bus/uacce: not in enabled drivers build config
00:03:12.291 bus/vmbus: not in enabled drivers build config
00:03:12.291 common/cnxk: not in enabled drivers build config
00:03:12.291 common/mlx5: not in enabled drivers build config
00:03:12.291 common/nfp: not in enabled drivers build config
00:03:12.291 common/nitrox: not in enabled drivers build config
00:03:12.291 common/qat: not in enabled drivers build config
00:03:12.291 common/sfc_efx: not in enabled drivers build config
00:03:12.291 mempool/bucket: not in enabled drivers build config
00:03:12.291 mempool/cnxk: not in enabled drivers build config
00:03:12.291 mempool/dpaa: not in enabled drivers build config
00:03:12.291 mempool/dpaa2: not in enabled drivers build config
00:03:12.291 mempool/octeontx: not in enabled drivers build config
00:03:12.291 mempool/stack: not in enabled drivers build config
00:03:12.291 dma/cnxk: not in enabled drivers build config
00:03:12.291 dma/dpaa: not in enabled drivers build config
00:03:12.291 dma/dpaa2: not in enabled drivers build config
00:03:12.292 dma/hisilicon: not in enabled drivers build config
00:03:12.292 dma/idxd: not in enabled drivers build config
00:03:12.292 dma/ioat: not in enabled drivers build config
00:03:12.292 dma/skeleton: not in enabled drivers build config
00:03:12.292 net/af_packet: not in enabled drivers build config
00:03:12.292 net/af_xdp: not in enabled drivers build config
00:03:12.292 net/ark: not in enabled drivers build config
00:03:12.292 net/atlantic: not in enabled drivers build config
00:03:12.292 net/avp: not in enabled drivers build config
00:03:12.292 net/axgbe: not in enabled drivers build config
00:03:12.292 net/bnx2x: not in enabled drivers build config
00:03:12.292 net/bnxt: not in enabled drivers build config
00:03:12.292 net/bonding: not in enabled drivers build config
00:03:12.292 net/cnxk: not in enabled drivers build config
00:03:12.292 net/cpfl: not in enabled drivers build config
00:03:12.292 net/cxgbe: not in enabled drivers build config
00:03:12.292 net/dpaa: not in enabled drivers build config
00:03:12.292 net/dpaa2: not in enabled drivers build config
00:03:12.292 net/e1000: not in enabled drivers build config
00:03:12.292 net/ena: not in enabled drivers build config
00:03:12.292 net/enetc: not in enabled drivers build config
00:03:12.292 net/enetfec: not in enabled drivers build config
00:03:12.292 net/enic: not in enabled drivers build config
00:03:12.292 net/failsafe: not in enabled drivers build config
00:03:12.292 net/fm10k: not in enabled drivers build config
00:03:12.292 net/gve: not in enabled drivers build config
00:03:12.292 net/hinic: not in enabled drivers build config
00:03:12.292 net/hns3: not in enabled drivers build config
00:03:12.292 net/i40e: not in enabled drivers build config
00:03:12.292 net/iavf: not in enabled drivers build config
00:03:12.292 net/ice: not in enabled drivers build config
00:03:12.292 net/idpf: not in enabled drivers build config
00:03:12.292 net/igc: not in enabled drivers build config
00:03:12.292 net/ionic: not in enabled drivers build config
00:03:12.292 net/ipn3ke: not in enabled drivers build config
00:03:12.292 net/ixgbe: not in enabled drivers build config
00:03:12.292 net/mana: not in enabled drivers build config
00:03:12.292 net/memif: not in enabled drivers build config
00:03:12.292 net/mlx4: not in enabled drivers build config
00:03:12.292 net/mlx5: not in enabled drivers build config
00:03:12.292 net/mvneta: not in enabled drivers build config
00:03:12.292 net/mvpp2: not in enabled drivers build config
00:03:12.292 net/netvsc: not in enabled drivers build config
00:03:12.292 net/nfb: not in enabled drivers build config
00:03:12.292 net/nfp: not in enabled drivers build config
00:03:12.292 net/ngbe: not in enabled drivers build config
00:03:12.292 net/null: not in enabled drivers build config
00:03:12.292 net/octeontx: not in enabled drivers build config
00:03:12.292 net/octeon_ep: not in enabled drivers build config
00:03:12.292 net/pcap: not in enabled drivers build config
00:03:12.292 net/pfe: not in enabled drivers build config
00:03:12.292 net/qede: not in enabled drivers build config
00:03:12.292 net/ring: not in enabled drivers build config
00:03:12.292 net/sfc: not in enabled drivers build config
00:03:12.292 net/softnic: not in enabled drivers build config
00:03:12.292 net/tap: not in enabled drivers build config
00:03:12.292 net/thunderx: not in enabled drivers build config
00:03:12.292 net/txgbe: not in enabled drivers build config
00:03:12.292 net/vdev_netvsc: not in enabled drivers build config
00:03:12.292 net/vhost: not in enabled drivers build config
00:03:12.292 net/virtio: not in enabled drivers build config
00:03:12.292 net/vmxnet3: not in enabled drivers build config
00:03:12.292 raw/*: missing internal dependency, "rawdev"
00:03:12.292 crypto/armv8: not in enabled drivers build config
00:03:12.292 crypto/bcmfs: not in enabled drivers build config
00:03:12.292 crypto/caam_jr: not in enabled drivers build config
00:03:12.292 crypto/ccp: not in enabled drivers build config
00:03:12.292 crypto/cnxk: not in enabled drivers build config
00:03:12.292 crypto/dpaa_sec: not in enabled drivers build config
00:03:12.292 crypto/dpaa2_sec: not in enabled drivers build config
00:03:12.292 crypto/ipsec_mb: not in enabled drivers build config
00:03:12.292 crypto/mlx5: not in enabled drivers build config
00:03:12.292 crypto/mvsam: not in enabled drivers build config
00:03:12.292 crypto/nitrox: not in enabled drivers build config
00:03:12.292 crypto/null: not in enabled drivers build config
00:03:12.292 crypto/octeontx: not in enabled drivers build config
00:03:12.292 crypto/openssl: not in enabled drivers build config
00:03:12.292 crypto/scheduler: not in enabled drivers build config
00:03:12.292 crypto/uadk: not in enabled drivers build config
00:03:12.292 crypto/virtio: not in enabled drivers build config
00:03:12.292 compress/isal: not in enabled drivers build config
00:03:12.292 compress/mlx5: not in enabled drivers build config
00:03:12.292 compress/nitrox: not in enabled drivers build config
00:03:12.292 compress/octeontx: not in enabled drivers build config
00:03:12.292 compress/zlib: not in enabled drivers build config
00:03:12.292 regex/*: missing internal dependency, "regexdev"
00:03:12.292 ml/*: missing internal dependency, "mldev"
00:03:12.292 vdpa/ifc: not in enabled drivers build config
00:03:12.292 vdpa/mlx5: not in enabled drivers build config
00:03:12.292 vdpa/nfp: not in enabled drivers build config
00:03:12.292 vdpa/sfc: not in enabled drivers build config
00:03:12.292 event/*: missing internal dependency, "eventdev"
00:03:12.292 baseband/*: missing internal dependency, "bbdev"
00:03:12.292 gpu/*: missing internal dependency, "gpudev"
00:03:12.292
00:03:12.292
00:03:12.292 Build targets in project: 85
00:03:12.292
00:03:12.292 DPDK 24.03.0
00:03:12.292
00:03:12.292 User defined options
00:03:12.292 buildtype : debug
00:03:12.292 default_library : shared
00:03:12.292 libdir : lib
00:03:12.292 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:12.292 b_sanitize : address
00:03:12.292 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:12.292 c_link_args :
00:03:12.292 cpu_instruction_set: native
00:03:12.292 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:12.292 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:12.292 enable_docs : false
00:03:12.292 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:12.292 enable_kmods : false
00:03:12.292 max_lcores : 128
00:03:12.292 tests : false
00:03:12.292
00:03:12.292 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:12.292 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:12.292 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:12.292 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:12.292 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:12.292 [4/268] Linking static target lib/librte_kvargs.a
00:03:12.292 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:12.292 [6/268] Linking static target lib/librte_log.a
00:03:12.860 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:12.860 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.860 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:12.860 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:12.860 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:13.119 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:13.119 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:13.119 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:13.119 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:13.378 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:13.378 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:13.378 [18/268] Linking static target lib/librte_telemetry.a
00:03:13.378 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.378 [20/268] Linking target lib/librte_log.so.24.1
00:03:13.636 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:13.636 [22/268] Linking target lib/librte_kvargs.so.24.1
00:03:13.895 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:13.895 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:13.895 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:14.153 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:14.153 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:14.153 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:14.153 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:14.153 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:14.153 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:14.412 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.412 [33/268] Linking target lib/librte_telemetry.so.24.1
00:03:14.412 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:14.412 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:14.669 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:14.669 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:14.927 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:14.927 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:14.927 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:14.927 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:14.927 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:15.186 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:15.186 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:15.186 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:15.186 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:15.444 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:15.444 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:15.444 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:15.702 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:15.960 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:15.960 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:16.217 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:16.217 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:16.217 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:16.217 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:16.217 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:16.515 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:16.515 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:16.783 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:16.783 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:16.783 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:17.042 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:17.042 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:17.042 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:17.301 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:17.301 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:17.301 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:17.301 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:17.559 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:17.559 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:17.817 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:17.817 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:17.817 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:17.817 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:18.076 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:18.076 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:18.076 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:18.076 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:18.076 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:18.076 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:18.076 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:18.334 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:18.592 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:18.592 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:18.592 [86/268] Linking static target lib/librte_ring.a
00:03:18.850 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:18.850 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:18.850 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:18.850 [90/268] Linking static target lib/librte_eal.a
00:03:19.109 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:19.109 [92/268] Linking static target lib/librte_mempool.a
00:03:19.109 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:19.368 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.368 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:19.368 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:19.368 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:19.368 [98/268] Linking static target lib/librte_rcu.a
00:03:19.368 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:19.368 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:19.935 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:19.936 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:19.936 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:19.936 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.936 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:19.936 [106/268] Linking static target lib/librte_mbuf.a
00:03:19.936 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:20.503 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:20.503 [109/268] Linking static target lib/librte_net.a
00:03:20.503 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.503 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:20.503 [112/268] Linking static target lib/librte_meter.a
00:03:20.762 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:20.762 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:20.762 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:20.762 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:21.021 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.021 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.021 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.279 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:21.279 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:21.846 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:21.846 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:21.846 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:21.846 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:21.846 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:21.846 [127/268] Linking static target lib/librte_pci.a 00:03:22.105 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:22.105 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:22.105 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:22.105 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:22.105 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:22.363 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:22.363 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:22.363 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.363 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:22.363 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:22.363 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:22.621 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:22.621 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:22.621 
[141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:22.621 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:22.621 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:22.621 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:22.621 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:22.621 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:22.878 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:22.878 [148/268] Linking static target lib/librte_cmdline.a 00:03:23.136 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:23.136 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:23.394 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:23.394 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:23.394 [153/268] Linking static target lib/librte_timer.a 00:03:23.652 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:23.910 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:23.910 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:23.910 [157/268] Linking static target lib/librte_ethdev.a 00:03:23.910 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:23.910 [159/268] Linking static target lib/librte_compressdev.a 00:03:23.910 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:23.910 [161/268] Linking static target lib/librte_hash.a 00:03:24.168 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.168 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:24.168 [164/268] 
Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:24.168 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:24.733 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.733 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:24.733 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:24.733 [169/268] Linking static target lib/librte_dmadev.a 00:03:24.733 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:24.992 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:24.992 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.992 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:24.992 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:25.250 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.508 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:25.767 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:25.767 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.767 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:25.767 [180/268] Linking static target lib/librte_cryptodev.a 00:03:26.024 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:26.025 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:26.025 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:26.025 [184/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:26.282 [185/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:26.282 [186/268] Linking static target lib/librte_power.a 00:03:26.282 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:26.282 [188/268] Linking static target lib/librte_reorder.a 00:03:26.540 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:26.540 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:26.798 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:26.798 [192/268] Linking static target lib/librte_security.a 00:03:26.798 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.056 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:27.313 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:27.572 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.572 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.572 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:27.831 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:28.090 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:28.349 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:28.349 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:28.349 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:28.349 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:28.349 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:28.349 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.917 [207/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:28.917 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:28.917 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:28.917 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:28.917 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:29.175 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:29.434 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.434 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:29.434 [215/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:29.434 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.434 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:29.434 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:29.434 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.434 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.434 [221/268] Linking static target drivers/librte_bus_vdev.a 00:03:29.434 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:29.434 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:29.434 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:29.693 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:29.693 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.974 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:30.542 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.800 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:30.800 [230/268] Linking target lib/librte_eal.so.24.1 00:03:30.800 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:30.800 [232/268] Linking target lib/librte_ring.so.24.1 00:03:30.800 [233/268] Linking target lib/librte_meter.so.24.1 00:03:30.800 [234/268] Linking target lib/librte_timer.so.24.1 00:03:30.800 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:30.800 [236/268] Linking target lib/librte_pci.so.24.1 00:03:30.800 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:31.059 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:31.059 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:31.059 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:31.059 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:31.059 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:31.059 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:31.059 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:31.059 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:31.317 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:31.317 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:31.317 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:31.317 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:31.317 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:31.317 [251/268] Linking target lib/librte_net.so.24.1 00:03:31.317 [252/268] Linking target 
lib/librte_reorder.so.24.1 00:03:31.317 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:31.317 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:31.576 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:31.576 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:31.576 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:31.576 [258/268] Linking target lib/librte_hash.so.24.1 00:03:31.576 [259/268] Linking target lib/librte_security.so.24.1 00:03:31.834 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:31.834 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.093 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:32.093 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:32.093 [264/268] Linking target lib/librte_power.so.24.1 00:03:35.378 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:35.378 [266/268] Linking static target lib/librte_vhost.a 00:03:36.754 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.754 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:36.754 INFO: autodetecting backend as ninja 00:03:36.754 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:58.683 CC lib/ut_mock/mock.o 00:03:58.683 CC lib/log/log.o 00:03:58.683 CC lib/log/log_flags.o 00:03:58.683 CC lib/log/log_deprecated.o 00:03:58.683 CC lib/ut/ut.o 00:03:58.683 LIB libspdk_ut_mock.a 00:03:58.683 LIB libspdk_ut.a 00:03:58.683 LIB libspdk_log.a 00:03:58.683 SO libspdk_ut_mock.so.6.0 00:03:58.683 SO libspdk_ut.so.2.0 00:03:58.683 SO libspdk_log.so.7.1 00:03:58.683 SYMLINK libspdk_ut_mock.so 00:03:58.683 SYMLINK libspdk_ut.so 00:03:58.683 SYMLINK libspdk_log.so 
00:03:58.683 CC lib/ioat/ioat.o 00:03:58.683 CXX lib/trace_parser/trace.o 00:03:58.683 CC lib/dma/dma.o 00:03:58.683 CC lib/util/base64.o 00:03:58.683 CC lib/util/bit_array.o 00:03:58.683 CC lib/util/cpuset.o 00:03:58.683 CC lib/util/crc16.o 00:03:58.683 CC lib/util/crc32.o 00:03:58.683 CC lib/util/crc32c.o 00:03:58.683 CC lib/vfio_user/host/vfio_user_pci.o 00:03:58.683 CC lib/vfio_user/host/vfio_user.o 00:03:58.683 CC lib/util/crc32_ieee.o 00:03:58.683 CC lib/util/crc64.o 00:03:58.683 CC lib/util/dif.o 00:03:58.683 LIB libspdk_dma.a 00:03:58.683 CC lib/util/fd.o 00:03:58.683 SO libspdk_dma.so.5.0 00:03:58.683 CC lib/util/fd_group.o 00:03:58.683 CC lib/util/file.o 00:03:58.683 SYMLINK libspdk_dma.so 00:03:58.683 CC lib/util/hexlify.o 00:03:58.683 LIB libspdk_ioat.a 00:03:58.683 CC lib/util/iov.o 00:03:58.683 CC lib/util/math.o 00:03:58.683 SO libspdk_ioat.so.7.0 00:03:58.683 CC lib/util/net.o 00:03:58.683 LIB libspdk_vfio_user.a 00:03:58.683 SYMLINK libspdk_ioat.so 00:03:58.683 CC lib/util/pipe.o 00:03:58.942 SO libspdk_vfio_user.so.5.0 00:03:58.942 CC lib/util/strerror_tls.o 00:03:58.942 CC lib/util/string.o 00:03:58.942 SYMLINK libspdk_vfio_user.so 00:03:58.942 CC lib/util/uuid.o 00:03:58.942 CC lib/util/xor.o 00:03:58.942 CC lib/util/zipf.o 00:03:58.942 CC lib/util/md5.o 00:03:59.201 LIB libspdk_util.a 00:03:59.460 SO libspdk_util.so.10.0 00:03:59.460 LIB libspdk_trace_parser.a 00:03:59.460 SO libspdk_trace_parser.so.6.0 00:03:59.719 SYMLINK libspdk_util.so 00:03:59.719 SYMLINK libspdk_trace_parser.so 00:03:59.719 CC lib/rdma_utils/rdma_utils.o 00:03:59.719 CC lib/rdma_provider/common.o 00:03:59.719 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:59.719 CC lib/json/json_parse.o 00:03:59.719 CC lib/env_dpdk/env.o 00:03:59.719 CC lib/conf/conf.o 00:03:59.719 CC lib/json/json_util.o 00:03:59.719 CC lib/env_dpdk/memory.o 00:03:59.719 CC lib/idxd/idxd.o 00:03:59.719 CC lib/vmd/vmd.o 00:03:59.995 CC lib/json/json_write.o 00:03:59.995 LIB libspdk_rdma_provider.a 
00:03:59.995 SO libspdk_rdma_provider.so.6.0 00:03:59.995 LIB libspdk_conf.a 00:03:59.995 CC lib/env_dpdk/pci.o 00:03:59.995 CC lib/vmd/led.o 00:03:59.995 SO libspdk_conf.so.6.0 00:04:00.266 LIB libspdk_rdma_utils.a 00:04:00.266 SYMLINK libspdk_rdma_provider.so 00:04:00.266 CC lib/idxd/idxd_user.o 00:04:00.266 SYMLINK libspdk_conf.so 00:04:00.266 SO libspdk_rdma_utils.so.1.0 00:04:00.266 CC lib/env_dpdk/init.o 00:04:00.266 SYMLINK libspdk_rdma_utils.so 00:04:00.266 CC lib/idxd/idxd_kernel.o 00:04:00.266 CC lib/env_dpdk/threads.o 00:04:00.266 LIB libspdk_json.a 00:04:00.266 SO libspdk_json.so.6.0 00:04:00.525 CC lib/env_dpdk/pci_ioat.o 00:04:00.525 CC lib/env_dpdk/pci_virtio.o 00:04:00.525 CC lib/env_dpdk/pci_vmd.o 00:04:00.525 SYMLINK libspdk_json.so 00:04:00.525 CC lib/env_dpdk/pci_idxd.o 00:04:00.525 CC lib/env_dpdk/pci_event.o 00:04:00.525 CC lib/env_dpdk/sigbus_handler.o 00:04:00.525 CC lib/env_dpdk/pci_dpdk.o 00:04:00.525 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.525 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.525 LIB libspdk_idxd.a 00:04:00.525 SO libspdk_idxd.so.12.1 00:04:00.785 SYMLINK libspdk_idxd.so 00:04:00.785 LIB libspdk_vmd.a 00:04:00.785 SO libspdk_vmd.so.6.0 00:04:00.785 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.785 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.785 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.785 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.785 SYMLINK libspdk_vmd.so 00:04:01.044 LIB libspdk_jsonrpc.a 00:04:01.044 SO libspdk_jsonrpc.so.6.0 00:04:01.302 SYMLINK libspdk_jsonrpc.so 00:04:01.560 CC lib/rpc/rpc.o 00:04:01.819 LIB libspdk_rpc.a 00:04:01.819 SO libspdk_rpc.so.6.0 00:04:01.819 SYMLINK libspdk_rpc.so 00:04:01.819 LIB libspdk_env_dpdk.a 00:04:02.077 SO libspdk_env_dpdk.so.15.1 00:04:02.077 CC lib/trace/trace.o 00:04:02.077 CC lib/trace/trace_flags.o 00:04:02.077 CC lib/trace/trace_rpc.o 00:04:02.077 CC lib/notify/notify.o 00:04:02.077 CC lib/notify/notify_rpc.o 00:04:02.077 CC lib/keyring/keyring_rpc.o 00:04:02.077 CC 
lib/keyring/keyring.o 00:04:02.077 SYMLINK libspdk_env_dpdk.so 00:04:02.336 LIB libspdk_notify.a 00:04:02.336 SO libspdk_notify.so.6.0 00:04:02.336 LIB libspdk_keyring.a 00:04:02.336 SYMLINK libspdk_notify.so 00:04:02.336 LIB libspdk_trace.a 00:04:02.336 SO libspdk_keyring.so.2.0 00:04:02.595 SO libspdk_trace.so.11.0 00:04:02.595 SYMLINK libspdk_keyring.so 00:04:02.595 SYMLINK libspdk_trace.so 00:04:02.854 CC lib/thread/iobuf.o 00:04:02.854 CC lib/thread/thread.o 00:04:02.854 CC lib/sock/sock.o 00:04:02.854 CC lib/sock/sock_rpc.o 00:04:03.421 LIB libspdk_sock.a 00:04:03.421 SO libspdk_sock.so.10.0 00:04:03.421 SYMLINK libspdk_sock.so 00:04:03.679 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:03.679 CC lib/nvme/nvme_ctrlr.o 00:04:03.679 CC lib/nvme/nvme_fabric.o 00:04:03.679 CC lib/nvme/nvme_ns.o 00:04:03.679 CC lib/nvme/nvme_ns_cmd.o 00:04:03.679 CC lib/nvme/nvme_pcie_common.o 00:04:03.679 CC lib/nvme/nvme.o 00:04:03.679 CC lib/nvme/nvme_pcie.o 00:04:03.679 CC lib/nvme/nvme_qpair.o 00:04:04.643 CC lib/nvme/nvme_quirks.o 00:04:04.643 CC lib/nvme/nvme_transport.o 00:04:04.643 CC lib/nvme/nvme_discovery.o 00:04:04.643 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.902 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.902 CC lib/nvme/nvme_tcp.o 00:04:04.902 CC lib/nvme/nvme_opal.o 00:04:04.902 LIB libspdk_thread.a 00:04:04.902 SO libspdk_thread.so.11.0 00:04:05.160 SYMLINK libspdk_thread.so 00:04:05.160 CC lib/nvme/nvme_io_msg.o 00:04:05.160 CC lib/accel/accel.o 00:04:05.160 CC lib/blob/blobstore.o 00:04:05.418 CC lib/blob/request.o 00:04:05.418 CC lib/nvme/nvme_poll_group.o 00:04:05.418 CC lib/accel/accel_rpc.o 00:04:05.678 CC lib/init/json_config.o 00:04:05.678 CC lib/init/subsystem.o 00:04:05.678 CC lib/init/subsystem_rpc.o 00:04:05.936 CC lib/virtio/virtio.o 00:04:05.936 CC lib/virtio/virtio_vhost_user.o 00:04:05.936 CC lib/virtio/virtio_vfio_user.o 00:04:05.936 CC lib/virtio/virtio_pci.o 00:04:05.936 CC lib/init/rpc.o 00:04:06.193 CC lib/blob/zeroes.o 00:04:06.193 LIB libspdk_init.a 
00:04:06.193 CC lib/nvme/nvme_zns.o 00:04:06.193 SO libspdk_init.so.6.0 00:04:06.193 CC lib/blob/blob_bs_dev.o 00:04:06.193 SYMLINK libspdk_init.so 00:04:06.193 CC lib/nvme/nvme_stubs.o 00:04:06.451 LIB libspdk_virtio.a 00:04:06.451 SO libspdk_virtio.so.7.0 00:04:06.451 CC lib/event/app.o 00:04:06.451 CC lib/fsdev/fsdev.o 00:04:06.451 SYMLINK libspdk_virtio.so 00:04:06.451 CC lib/nvme/nvme_auth.o 00:04:06.709 CC lib/nvme/nvme_cuse.o 00:04:06.709 CC lib/accel/accel_sw.o 00:04:06.709 CC lib/nvme/nvme_rdma.o 00:04:06.709 CC lib/event/reactor.o 00:04:06.967 CC lib/event/log_rpc.o 00:04:06.967 CC lib/event/app_rpc.o 00:04:06.967 LIB libspdk_accel.a 00:04:06.967 CC lib/event/scheduler_static.o 00:04:06.967 SO libspdk_accel.so.16.0 00:04:07.225 CC lib/fsdev/fsdev_io.o 00:04:07.225 SYMLINK libspdk_accel.so 00:04:07.225 CC lib/fsdev/fsdev_rpc.o 00:04:07.483 CC lib/bdev/bdev.o 00:04:07.483 CC lib/bdev/bdev_rpc.o 00:04:07.483 LIB libspdk_event.a 00:04:07.483 CC lib/bdev/bdev_zone.o 00:04:07.483 CC lib/bdev/part.o 00:04:07.483 SO libspdk_event.so.14.0 00:04:07.483 SYMLINK libspdk_event.so 00:04:07.483 CC lib/bdev/scsi_nvme.o 00:04:07.483 LIB libspdk_fsdev.a 00:04:07.741 SO libspdk_fsdev.so.2.0 00:04:07.742 SYMLINK libspdk_fsdev.so 00:04:07.999 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:08.565 LIB libspdk_nvme.a 00:04:08.823 SO libspdk_nvme.so.14.1 00:04:08.823 LIB libspdk_fuse_dispatcher.a 00:04:08.823 SO libspdk_fuse_dispatcher.so.1.0 00:04:08.823 SYMLINK libspdk_fuse_dispatcher.so 00:04:09.081 SYMLINK libspdk_nvme.so 00:04:09.647 LIB libspdk_blob.a 00:04:09.904 SO libspdk_blob.so.11.0 00:04:09.904 SYMLINK libspdk_blob.so 00:04:10.163 CC lib/blobfs/blobfs.o 00:04:10.163 CC lib/lvol/lvol.o 00:04:10.163 CC lib/blobfs/tree.o 00:04:11.098 LIB libspdk_bdev.a 00:04:11.098 SO libspdk_bdev.so.17.0 00:04:11.098 SYMLINK libspdk_bdev.so 00:04:11.356 LIB libspdk_blobfs.a 00:04:11.356 SO libspdk_blobfs.so.10.0 00:04:11.356 CC lib/nvmf/ctrlr.o 00:04:11.356 CC lib/ftl/ftl_init.o 
00:04:11.356 CC lib/nvmf/ctrlr_discovery.o 00:04:11.356 CC lib/ftl/ftl_layout.o 00:04:11.356 CC lib/ftl/ftl_core.o 00:04:11.356 CC lib/nbd/nbd.o 00:04:11.356 CC lib/scsi/dev.o 00:04:11.356 CC lib/ublk/ublk.o 00:04:11.356 LIB libspdk_lvol.a 00:04:11.356 SYMLINK libspdk_blobfs.so 00:04:11.356 CC lib/ublk/ublk_rpc.o 00:04:11.356 SO libspdk_lvol.so.10.0 00:04:11.615 SYMLINK libspdk_lvol.so 00:04:11.615 CC lib/ftl/ftl_debug.o 00:04:11.615 CC lib/ftl/ftl_io.o 00:04:11.615 CC lib/scsi/lun.o 00:04:11.615 CC lib/scsi/port.o 00:04:11.873 CC lib/ftl/ftl_sb.o 00:04:11.873 CC lib/scsi/scsi.o 00:04:11.873 CC lib/ftl/ftl_l2p.o 00:04:11.873 CC lib/ftl/ftl_l2p_flat.o 00:04:11.873 CC lib/nbd/nbd_rpc.o 00:04:11.873 CC lib/ftl/ftl_nv_cache.o 00:04:11.873 CC lib/ftl/ftl_band.o 00:04:12.131 CC lib/scsi/scsi_bdev.o 00:04:12.131 CC lib/scsi/scsi_pr.o 00:04:12.131 CC lib/nvmf/ctrlr_bdev.o 00:04:12.131 LIB libspdk_nbd.a 00:04:12.131 CC lib/nvmf/subsystem.o 00:04:12.131 CC lib/nvmf/nvmf.o 00:04:12.131 SO libspdk_nbd.so.7.0 00:04:12.131 SYMLINK libspdk_nbd.so 00:04:12.131 CC lib/nvmf/nvmf_rpc.o 00:04:12.389 LIB libspdk_ublk.a 00:04:12.389 SO libspdk_ublk.so.3.0 00:04:12.389 SYMLINK libspdk_ublk.so 00:04:12.389 CC lib/nvmf/transport.o 00:04:12.389 CC lib/nvmf/tcp.o 00:04:12.389 CC lib/ftl/ftl_band_ops.o 00:04:12.647 CC lib/scsi/scsi_rpc.o 00:04:12.905 CC lib/scsi/task.o 00:04:12.905 CC lib/ftl/ftl_writer.o 00:04:12.905 CC lib/nvmf/stubs.o 00:04:13.164 LIB libspdk_scsi.a 00:04:13.164 SO libspdk_scsi.so.9.0 00:04:13.164 CC lib/ftl/ftl_rq.o 00:04:13.164 CC lib/ftl/ftl_reloc.o 00:04:13.164 SYMLINK libspdk_scsi.so 00:04:13.164 CC lib/nvmf/mdns_server.o 00:04:13.164 CC lib/nvmf/rdma.o 00:04:13.422 CC lib/nvmf/auth.o 00:04:13.422 CC lib/ftl/ftl_l2p_cache.o 00:04:13.681 CC lib/iscsi/conn.o 00:04:13.681 CC lib/iscsi/init_grp.o 00:04:13.681 CC lib/iscsi/iscsi.o 00:04:13.681 CC lib/iscsi/param.o 00:04:13.681 CC lib/iscsi/portal_grp.o 00:04:13.681 CC lib/iscsi/tgt_node.o 00:04:13.954 CC 
lib/iscsi/iscsi_subsystem.o 00:04:13.954 CC lib/ftl/ftl_p2l.o 00:04:14.253 CC lib/iscsi/iscsi_rpc.o 00:04:14.253 CC lib/iscsi/task.o 00:04:14.253 CC lib/ftl/ftl_p2l_log.o 00:04:14.253 CC lib/ftl/mngt/ftl_mngt.o 00:04:14.512 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:14.512 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:14.512 CC lib/vhost/vhost.o 00:04:14.512 CC lib/vhost/vhost_rpc.o 00:04:14.512 CC lib/vhost/vhost_scsi.o 00:04:14.772 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:14.772 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:14.772 CC lib/vhost/vhost_blk.o 00:04:14.772 CC lib/vhost/rte_vhost_user.o 00:04:14.772 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:14.772 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:15.031 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:15.031 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:15.290 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:15.290 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:15.290 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:15.290 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:15.548 CC lib/ftl/utils/ftl_conf.o 00:04:15.548 CC lib/ftl/utils/ftl_md.o 00:04:15.548 LIB libspdk_iscsi.a 00:04:15.548 CC lib/ftl/utils/ftl_mempool.o 00:04:15.548 SO libspdk_iscsi.so.8.0 00:04:15.548 CC lib/ftl/utils/ftl_bitmap.o 00:04:15.807 CC lib/ftl/utils/ftl_property.o 00:04:15.807 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:15.807 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:15.807 SYMLINK libspdk_iscsi.so 00:04:15.807 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:15.807 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.807 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.807 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:16.065 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:16.065 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:16.065 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:16.065 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:16.065 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:16.065 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:16.065 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:16.065 CC lib/ftl/base/ftl_base_dev.o 00:04:16.065 LIB libspdk_vhost.a 
00:04:16.324 SO libspdk_vhost.so.8.0 00:04:16.324 LIB libspdk_nvmf.a 00:04:16.324 CC lib/ftl/base/ftl_base_bdev.o 00:04:16.324 CC lib/ftl/ftl_trace.o 00:04:16.324 SYMLINK libspdk_vhost.so 00:04:16.324 SO libspdk_nvmf.so.20.0 00:04:16.582 LIB libspdk_ftl.a 00:04:16.582 SYMLINK libspdk_nvmf.so 00:04:16.840 SO libspdk_ftl.so.9.0 00:04:17.099 SYMLINK libspdk_ftl.so 00:04:17.357 CC module/env_dpdk/env_dpdk_rpc.o 00:04:17.616 CC module/accel/iaa/accel_iaa.o 00:04:17.616 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:17.616 CC module/accel/ioat/accel_ioat.o 00:04:17.616 CC module/accel/dsa/accel_dsa.o 00:04:17.616 CC module/blob/bdev/blob_bdev.o 00:04:17.616 CC module/keyring/file/keyring.o 00:04:17.616 CC module/accel/error/accel_error.o 00:04:17.616 CC module/sock/posix/posix.o 00:04:17.616 CC module/fsdev/aio/fsdev_aio.o 00:04:17.616 LIB libspdk_env_dpdk_rpc.a 00:04:17.616 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.616 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.616 CC module/keyring/file/keyring_rpc.o 00:04:17.616 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:17.874 LIB libspdk_scheduler_dynamic.a 00:04:17.874 CC module/accel/error/accel_error_rpc.o 00:04:17.874 CC module/accel/iaa/accel_iaa_rpc.o 00:04:17.874 SO libspdk_scheduler_dynamic.so.4.0 00:04:17.874 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.874 SYMLINK libspdk_scheduler_dynamic.so 00:04:17.874 LIB libspdk_blob_bdev.a 00:04:17.874 LIB libspdk_keyring_file.a 00:04:17.874 CC module/accel/dsa/accel_dsa_rpc.o 00:04:17.874 SO libspdk_blob_bdev.so.11.0 00:04:17.874 CC module/fsdev/aio/linux_aio_mgr.o 00:04:17.874 SO libspdk_keyring_file.so.2.0 00:04:17.874 LIB libspdk_accel_error.a 00:04:17.874 LIB libspdk_accel_iaa.a 00:04:17.874 LIB libspdk_accel_ioat.a 00:04:17.874 SO libspdk_accel_error.so.2.0 00:04:17.874 SO libspdk_accel_iaa.so.3.0 00:04:17.874 SYMLINK libspdk_blob_bdev.so 00:04:17.874 SO libspdk_accel_ioat.so.6.0 00:04:18.133 SYMLINK libspdk_keyring_file.so 00:04:18.133 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:18.133 SYMLINK libspdk_accel_ioat.so 00:04:18.133 SYMLINK libspdk_accel_error.so 00:04:18.133 LIB libspdk_accel_dsa.a 00:04:18.133 SYMLINK libspdk_accel_iaa.so 00:04:18.133 SO libspdk_accel_dsa.so.5.0 00:04:18.133 SYMLINK libspdk_accel_dsa.so 00:04:18.133 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.133 CC module/keyring/linux/keyring.o 00:04:18.133 LIB libspdk_scheduler_dpdk_governor.a 00:04:18.391 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:18.391 CC module/bdev/gpt/gpt.o 00:04:18.391 CC module/bdev/error/vbdev_error.o 00:04:18.391 CC module/bdev/delay/vbdev_delay.o 00:04:18.391 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.391 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:18.391 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.391 LIB libspdk_scheduler_gscheduler.a 00:04:18.391 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.391 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.391 CC module/keyring/linux/keyring_rpc.o 00:04:18.391 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.391 LIB libspdk_fsdev_aio.a 00:04:18.650 SO libspdk_fsdev_aio.so.1.0 00:04:18.650 LIB libspdk_sock_posix.a 00:04:18.650 LIB libspdk_keyring_linux.a 00:04:18.650 SO libspdk_sock_posix.so.6.0 00:04:18.650 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.650 LIB libspdk_blobfs_bdev.a 00:04:18.650 SO libspdk_keyring_linux.so.1.0 00:04:18.650 SO libspdk_blobfs_bdev.so.6.0 00:04:18.650 SYMLINK libspdk_fsdev_aio.so 00:04:18.650 CC module/bdev/malloc/bdev_malloc.o 00:04:18.650 SYMLINK libspdk_sock_posix.so 00:04:18.650 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.650 SYMLINK libspdk_keyring_linux.so 00:04:18.650 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.650 SYMLINK libspdk_blobfs_bdev.so 00:04:18.650 CC module/bdev/null/bdev_null.o 00:04:18.913 CC module/bdev/nvme/bdev_nvme.o 00:04:18.913 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.913 CC module/bdev/passthru/vbdev_passthru.o 00:04:18.913 CC module/bdev/raid/bdev_raid.o 00:04:18.913 
LIB libspdk_bdev_error.a 00:04:18.913 LIB libspdk_bdev_gpt.a 00:04:18.913 SO libspdk_bdev_error.so.6.0 00:04:18.913 SO libspdk_bdev_gpt.so.6.0 00:04:18.913 SYMLINK libspdk_bdev_error.so 00:04:18.913 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:18.913 SYMLINK libspdk_bdev_gpt.so 00:04:18.913 CC module/bdev/raid/bdev_raid_rpc.o 00:04:18.913 CC module/bdev/nvme/nvme_rpc.o 00:04:18.913 LIB libspdk_bdev_delay.a 00:04:19.172 SO libspdk_bdev_delay.so.6.0 00:04:19.172 CC module/bdev/null/bdev_null_rpc.o 00:04:19.172 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:19.172 LIB libspdk_bdev_lvol.a 00:04:19.172 SYMLINK libspdk_bdev_delay.so 00:04:19.172 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.172 SO libspdk_bdev_lvol.so.6.0 00:04:19.172 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:19.172 SYMLINK libspdk_bdev_lvol.so 00:04:19.172 CC module/bdev/nvme/vbdev_opal.o 00:04:19.172 CC module/bdev/raid/bdev_raid_sb.o 00:04:19.172 LIB libspdk_bdev_null.a 00:04:19.172 LIB libspdk_bdev_malloc.a 00:04:19.431 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:19.431 SO libspdk_bdev_null.so.6.0 00:04:19.431 SO libspdk_bdev_malloc.so.6.0 00:04:19.431 SYMLINK libspdk_bdev_null.so 00:04:19.431 SYMLINK libspdk_bdev_malloc.so 00:04:19.431 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.431 LIB libspdk_bdev_passthru.a 00:04:19.431 SO libspdk_bdev_passthru.so.6.0 00:04:19.431 CC module/bdev/split/vbdev_split.o 00:04:19.431 SYMLINK libspdk_bdev_passthru.so 00:04:19.431 CC module/bdev/raid/raid0.o 00:04:19.431 CC module/bdev/raid/raid1.o 00:04:19.690 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:19.690 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:19.690 CC module/bdev/split/vbdev_split_rpc.o 00:04:19.690 CC module/bdev/aio/bdev_aio.o 00:04:19.690 CC module/bdev/aio/bdev_aio_rpc.o 00:04:19.690 LIB libspdk_bdev_split.a 00:04:19.949 SO libspdk_bdev_split.so.6.0 00:04:19.949 CC module/bdev/raid/concat.o 00:04:19.949 CC module/bdev/raid/raid5f.o 00:04:19.949 SYMLINK libspdk_bdev_split.so 
00:04:19.949 LIB libspdk_bdev_zone_block.a 00:04:19.949 CC module/bdev/ftl/bdev_ftl.o 00:04:19.949 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:19.949 CC module/bdev/iscsi/bdev_iscsi.o 00:04:19.949 SO libspdk_bdev_zone_block.so.6.0 00:04:20.208 LIB libspdk_bdev_aio.a 00:04:20.208 SYMLINK libspdk_bdev_zone_block.so 00:04:20.208 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:20.208 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:20.208 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:20.208 SO libspdk_bdev_aio.so.6.0 00:04:20.208 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:20.208 SYMLINK libspdk_bdev_aio.so 00:04:20.468 LIB libspdk_bdev_ftl.a 00:04:20.468 SO libspdk_bdev_ftl.so.6.0 00:04:20.468 SYMLINK libspdk_bdev_ftl.so 00:04:20.468 LIB libspdk_bdev_iscsi.a 00:04:20.468 SO libspdk_bdev_iscsi.so.6.0 00:04:20.468 LIB libspdk_bdev_raid.a 00:04:20.468 SYMLINK libspdk_bdev_iscsi.so 00:04:20.727 SO libspdk_bdev_raid.so.6.0 00:04:20.727 SYMLINK libspdk_bdev_raid.so 00:04:20.727 LIB libspdk_bdev_virtio.a 00:04:20.986 SO libspdk_bdev_virtio.so.6.0 00:04:20.986 SYMLINK libspdk_bdev_virtio.so 00:04:22.363 LIB libspdk_bdev_nvme.a 00:04:22.363 SO libspdk_bdev_nvme.so.7.1 00:04:22.363 SYMLINK libspdk_bdev_nvme.so 00:04:22.930 CC module/event/subsystems/iobuf/iobuf.o 00:04:22.930 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:22.930 CC module/event/subsystems/keyring/keyring.o 00:04:22.930 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:22.930 CC module/event/subsystems/scheduler/scheduler.o 00:04:22.930 CC module/event/subsystems/vmd/vmd.o 00:04:22.930 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:22.930 CC module/event/subsystems/fsdev/fsdev.o 00:04:22.930 CC module/event/subsystems/sock/sock.o 00:04:22.930 LIB libspdk_event_vhost_blk.a 00:04:22.930 LIB libspdk_event_keyring.a 00:04:22.930 LIB libspdk_event_scheduler.a 00:04:22.930 LIB libspdk_event_vmd.a 00:04:22.930 LIB libspdk_event_fsdev.a 00:04:22.930 SO libspdk_event_vhost_blk.so.3.0 00:04:22.930 LIB 
libspdk_event_iobuf.a 00:04:22.931 SO libspdk_event_keyring.so.1.0 00:04:22.931 LIB libspdk_event_sock.a 00:04:22.931 SO libspdk_event_scheduler.so.4.0 00:04:22.931 SO libspdk_event_vmd.so.6.0 00:04:22.931 SO libspdk_event_fsdev.so.1.0 00:04:23.190 SO libspdk_event_iobuf.so.3.0 00:04:23.190 SO libspdk_event_sock.so.5.0 00:04:23.190 SYMLINK libspdk_event_vhost_blk.so 00:04:23.190 SYMLINK libspdk_event_keyring.so 00:04:23.190 SYMLINK libspdk_event_scheduler.so 00:04:23.190 SYMLINK libspdk_event_vmd.so 00:04:23.190 SYMLINK libspdk_event_fsdev.so 00:04:23.190 SYMLINK libspdk_event_sock.so 00:04:23.190 SYMLINK libspdk_event_iobuf.so 00:04:23.449 CC module/event/subsystems/accel/accel.o 00:04:23.708 LIB libspdk_event_accel.a 00:04:23.708 SO libspdk_event_accel.so.6.0 00:04:23.708 SYMLINK libspdk_event_accel.so 00:04:23.966 CC module/event/subsystems/bdev/bdev.o 00:04:24.224 LIB libspdk_event_bdev.a 00:04:24.224 SO libspdk_event_bdev.so.6.0 00:04:24.224 SYMLINK libspdk_event_bdev.so 00:04:24.483 CC module/event/subsystems/nbd/nbd.o 00:04:24.483 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:24.483 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:24.483 CC module/event/subsystems/scsi/scsi.o 00:04:24.483 CC module/event/subsystems/ublk/ublk.o 00:04:24.752 LIB libspdk_event_ublk.a 00:04:24.752 LIB libspdk_event_nbd.a 00:04:24.752 LIB libspdk_event_scsi.a 00:04:24.752 SO libspdk_event_ublk.so.3.0 00:04:24.752 SO libspdk_event_nbd.so.6.0 00:04:24.752 SO libspdk_event_scsi.so.6.0 00:04:24.752 SYMLINK libspdk_event_ublk.so 00:04:24.752 SYMLINK libspdk_event_nbd.so 00:04:24.752 SYMLINK libspdk_event_scsi.so 00:04:24.752 LIB libspdk_event_nvmf.a 00:04:25.016 SO libspdk_event_nvmf.so.6.0 00:04:25.016 SYMLINK libspdk_event_nvmf.so 00:04:25.016 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.016 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:25.275 LIB libspdk_event_vhost_scsi.a 00:04:25.275 LIB libspdk_event_iscsi.a 00:04:25.275 SO libspdk_event_vhost_scsi.so.3.0 
00:04:25.275 SO libspdk_event_iscsi.so.6.0 00:04:25.275 SYMLINK libspdk_event_vhost_scsi.so 00:04:25.534 SYMLINK libspdk_event_iscsi.so 00:04:25.534 SO libspdk.so.6.0 00:04:25.534 SYMLINK libspdk.so 00:04:25.793 CXX app/trace/trace.o 00:04:25.793 CC app/trace_record/trace_record.o 00:04:25.793 TEST_HEADER include/spdk/accel.h 00:04:25.793 TEST_HEADER include/spdk/accel_module.h 00:04:25.793 TEST_HEADER include/spdk/assert.h 00:04:25.793 TEST_HEADER include/spdk/barrier.h 00:04:25.793 TEST_HEADER include/spdk/base64.h 00:04:25.793 TEST_HEADER include/spdk/bdev.h 00:04:25.793 TEST_HEADER include/spdk/bdev_module.h 00:04:25.793 TEST_HEADER include/spdk/bdev_zone.h 00:04:25.793 TEST_HEADER include/spdk/bit_array.h 00:04:25.793 TEST_HEADER include/spdk/bit_pool.h 00:04:25.793 TEST_HEADER include/spdk/blob_bdev.h 00:04:25.793 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:25.793 TEST_HEADER include/spdk/blobfs.h 00:04:25.793 TEST_HEADER include/spdk/blob.h 00:04:25.793 TEST_HEADER include/spdk/conf.h 00:04:25.793 TEST_HEADER include/spdk/config.h 00:04:25.793 CC app/nvmf_tgt/nvmf_main.o 00:04:25.793 TEST_HEADER include/spdk/cpuset.h 00:04:25.793 TEST_HEADER include/spdk/crc16.h 00:04:25.793 TEST_HEADER include/spdk/crc32.h 00:04:25.793 TEST_HEADER include/spdk/crc64.h 00:04:25.793 TEST_HEADER include/spdk/dif.h 00:04:25.793 TEST_HEADER include/spdk/dma.h 00:04:25.793 TEST_HEADER include/spdk/endian.h 00:04:25.793 TEST_HEADER include/spdk/env_dpdk.h 00:04:25.793 TEST_HEADER include/spdk/env.h 00:04:26.052 TEST_HEADER include/spdk/event.h 00:04:26.052 CC examples/util/zipf/zipf.o 00:04:26.052 TEST_HEADER include/spdk/fd_group.h 00:04:26.052 CC examples/ioat/perf/perf.o 00:04:26.052 TEST_HEADER include/spdk/fd.h 00:04:26.052 TEST_HEADER include/spdk/file.h 00:04:26.052 TEST_HEADER include/spdk/fsdev.h 00:04:26.052 TEST_HEADER include/spdk/fsdev_module.h 00:04:26.052 CC test/thread/poller_perf/poller_perf.o 00:04:26.052 TEST_HEADER include/spdk/ftl.h 00:04:26.052 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:04:26.052 TEST_HEADER include/spdk/gpt_spec.h 00:04:26.052 TEST_HEADER include/spdk/hexlify.h 00:04:26.052 TEST_HEADER include/spdk/histogram_data.h 00:04:26.052 TEST_HEADER include/spdk/idxd.h 00:04:26.052 CC test/dma/test_dma/test_dma.o 00:04:26.052 TEST_HEADER include/spdk/idxd_spec.h 00:04:26.052 TEST_HEADER include/spdk/init.h 00:04:26.052 TEST_HEADER include/spdk/ioat.h 00:04:26.052 TEST_HEADER include/spdk/ioat_spec.h 00:04:26.052 TEST_HEADER include/spdk/iscsi_spec.h 00:04:26.052 TEST_HEADER include/spdk/json.h 00:04:26.052 CC test/app/bdev_svc/bdev_svc.o 00:04:26.052 TEST_HEADER include/spdk/jsonrpc.h 00:04:26.052 TEST_HEADER include/spdk/keyring.h 00:04:26.052 TEST_HEADER include/spdk/keyring_module.h 00:04:26.052 TEST_HEADER include/spdk/likely.h 00:04:26.052 TEST_HEADER include/spdk/log.h 00:04:26.052 TEST_HEADER include/spdk/lvol.h 00:04:26.052 TEST_HEADER include/spdk/md5.h 00:04:26.052 TEST_HEADER include/spdk/memory.h 00:04:26.052 TEST_HEADER include/spdk/mmio.h 00:04:26.052 TEST_HEADER include/spdk/nbd.h 00:04:26.052 TEST_HEADER include/spdk/net.h 00:04:26.052 TEST_HEADER include/spdk/notify.h 00:04:26.052 TEST_HEADER include/spdk/nvme.h 00:04:26.052 TEST_HEADER include/spdk/nvme_intel.h 00:04:26.052 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:26.052 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:26.052 TEST_HEADER include/spdk/nvme_spec.h 00:04:26.052 TEST_HEADER include/spdk/nvme_zns.h 00:04:26.052 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:26.052 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:26.052 TEST_HEADER include/spdk/nvmf.h 00:04:26.052 TEST_HEADER include/spdk/nvmf_spec.h 00:04:26.052 TEST_HEADER include/spdk/nvmf_transport.h 00:04:26.052 TEST_HEADER include/spdk/opal.h 00:04:26.052 TEST_HEADER include/spdk/opal_spec.h 00:04:26.052 TEST_HEADER include/spdk/pci_ids.h 00:04:26.052 TEST_HEADER include/spdk/pipe.h 00:04:26.052 TEST_HEADER include/spdk/queue.h 00:04:26.052 TEST_HEADER include/spdk/reduce.h 
00:04:26.052 CC test/env/mem_callbacks/mem_callbacks.o 00:04:26.052 TEST_HEADER include/spdk/rpc.h 00:04:26.052 TEST_HEADER include/spdk/scheduler.h 00:04:26.052 TEST_HEADER include/spdk/scsi.h 00:04:26.052 TEST_HEADER include/spdk/scsi_spec.h 00:04:26.052 TEST_HEADER include/spdk/sock.h 00:04:26.052 TEST_HEADER include/spdk/stdinc.h 00:04:26.052 TEST_HEADER include/spdk/string.h 00:04:26.052 TEST_HEADER include/spdk/thread.h 00:04:26.052 TEST_HEADER include/spdk/trace.h 00:04:26.052 TEST_HEADER include/spdk/trace_parser.h 00:04:26.052 TEST_HEADER include/spdk/tree.h 00:04:26.052 TEST_HEADER include/spdk/ublk.h 00:04:26.052 TEST_HEADER include/spdk/util.h 00:04:26.052 TEST_HEADER include/spdk/uuid.h 00:04:26.052 TEST_HEADER include/spdk/version.h 00:04:26.052 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:26.052 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:26.052 LINK zipf 00:04:26.052 TEST_HEADER include/spdk/vhost.h 00:04:26.052 LINK spdk_trace_record 00:04:26.052 TEST_HEADER include/spdk/vmd.h 00:04:26.052 TEST_HEADER include/spdk/xor.h 00:04:26.052 TEST_HEADER include/spdk/zipf.h 00:04:26.052 LINK poller_perf 00:04:26.310 CXX test/cpp_headers/accel.o 00:04:26.310 LINK nvmf_tgt 00:04:26.310 LINK bdev_svc 00:04:26.310 LINK ioat_perf 00:04:26.310 LINK spdk_trace 00:04:26.310 CXX test/cpp_headers/accel_module.o 00:04:26.569 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:26.569 CC app/iscsi_tgt/iscsi_tgt.o 00:04:26.569 CC examples/ioat/verify/verify.o 00:04:26.569 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:26.569 CXX test/cpp_headers/assert.o 00:04:26.569 CC app/spdk_tgt/spdk_tgt.o 00:04:26.569 LINK test_dma 00:04:26.569 LINK interrupt_tgt 00:04:26.828 CC examples/thread/thread/thread_ex.o 00:04:26.828 LINK iscsi_tgt 00:04:26.828 LINK mem_callbacks 00:04:26.828 CC examples/sock/hello_world/hello_sock.o 00:04:26.828 CXX test/cpp_headers/barrier.o 00:04:26.828 LINK verify 00:04:26.828 LINK spdk_tgt 00:04:27.087 CC app/spdk_lspci/spdk_lspci.o 00:04:27.087 CXX 
test/cpp_headers/base64.o 00:04:27.087 CC app/spdk_nvme_perf/perf.o 00:04:27.087 LINK thread 00:04:27.087 CC test/env/vtophys/vtophys.o 00:04:27.087 LINK nvme_fuzz 00:04:27.087 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:27.087 LINK hello_sock 00:04:27.087 LINK spdk_lspci 00:04:27.087 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.087 CXX test/cpp_headers/bdev.o 00:04:27.345 LINK vtophys 00:04:27.345 CC examples/vmd/led/led.o 00:04:27.345 CXX test/cpp_headers/bdev_module.o 00:04:27.345 CXX test/cpp_headers/bdev_zone.o 00:04:27.345 LINK env_dpdk_post_init 00:04:27.345 LINK lsvmd 00:04:27.345 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.345 LINK led 00:04:27.345 CXX test/cpp_headers/bit_array.o 00:04:27.604 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.604 CC app/spdk_nvme_identify/identify.o 00:04:27.604 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:27.604 CXX test/cpp_headers/bit_pool.o 00:04:27.604 CC test/env/memory/memory_ut.o 00:04:27.604 CC test/env/pci/pci_ut.o 00:04:27.604 CXX test/cpp_headers/blob_bdev.o 00:04:27.863 CC test/app/histogram_perf/histogram_perf.o 00:04:27.863 CC examples/idxd/perf/perf.o 00:04:27.863 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:27.863 CXX test/cpp_headers/blobfs_bdev.o 00:04:27.863 LINK histogram_perf 00:04:28.121 LINK vhost_fuzz 00:04:28.121 LINK spdk_nvme_perf 00:04:28.121 LINK pci_ut 00:04:28.121 CXX test/cpp_headers/blobfs.o 00:04:28.121 CC test/rpc_client/rpc_client_test.o 00:04:28.121 LINK hello_fsdev 00:04:28.380 LINK idxd_perf 00:04:28.380 CXX test/cpp_headers/blob.o 00:04:28.380 CC test/accel/dif/dif.o 00:04:28.380 LINK rpc_client_test 00:04:28.639 CXX test/cpp_headers/conf.o 00:04:28.639 CC test/blobfs/mkfs/mkfs.o 00:04:28.639 CC app/spdk_nvme_discover/discovery_aer.o 00:04:28.639 LINK spdk_nvme_identify 00:04:28.639 CC test/event/event_perf/event_perf.o 00:04:28.639 CC examples/accel/perf/accel_perf.o 00:04:28.639 CXX test/cpp_headers/config.o 00:04:28.898 CXX test/cpp_headers/cpuset.o 
00:04:28.898 LINK mkfs 00:04:28.898 LINK event_perf 00:04:28.898 LINK spdk_nvme_discover 00:04:28.898 CC examples/blob/hello_world/hello_blob.o 00:04:28.898 CXX test/cpp_headers/crc16.o 00:04:29.157 CXX test/cpp_headers/crc32.o 00:04:29.157 CC examples/nvme/hello_world/hello_world.o 00:04:29.157 LINK memory_ut 00:04:29.157 CC test/event/reactor/reactor.o 00:04:29.157 CC app/spdk_top/spdk_top.o 00:04:29.157 LINK hello_blob 00:04:29.157 CXX test/cpp_headers/crc64.o 00:04:29.157 LINK reactor 00:04:29.157 CC examples/nvme/reconnect/reconnect.o 00:04:29.416 LINK hello_world 00:04:29.416 LINK dif 00:04:29.416 LINK accel_perf 00:04:29.416 CXX test/cpp_headers/dif.o 00:04:29.416 CC examples/blob/cli/blobcli.o 00:04:29.416 CC test/event/reactor_perf/reactor_perf.o 00:04:29.416 CC app/vhost/vhost.o 00:04:29.675 CC test/event/app_repeat/app_repeat.o 00:04:29.675 LINK iscsi_fuzz 00:04:29.675 CXX test/cpp_headers/dma.o 00:04:29.675 CC app/spdk_dd/spdk_dd.o 00:04:29.675 LINK reactor_perf 00:04:29.675 LINK reconnect 00:04:29.675 LINK vhost 00:04:29.675 LINK app_repeat 00:04:29.675 CC examples/bdev/hello_world/hello_bdev.o 00:04:29.938 CXX test/cpp_headers/endian.o 00:04:29.938 CC test/app/jsoncat/jsoncat.o 00:04:29.938 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:29.938 CXX test/cpp_headers/env_dpdk.o 00:04:29.938 CC test/event/scheduler/scheduler.o 00:04:30.204 LINK hello_bdev 00:04:30.204 CC test/app/stub/stub.o 00:04:30.204 LINK blobcli 00:04:30.204 LINK jsoncat 00:04:30.204 LINK spdk_dd 00:04:30.204 CC app/fio/nvme/fio_plugin.o 00:04:30.204 CXX test/cpp_headers/env.o 00:04:30.204 LINK spdk_top 00:04:30.204 LINK scheduler 00:04:30.204 CXX test/cpp_headers/event.o 00:04:30.204 LINK stub 00:04:30.463 CXX test/cpp_headers/fd_group.o 00:04:30.463 CXX test/cpp_headers/fd.o 00:04:30.463 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.463 CXX test/cpp_headers/file.o 00:04:30.463 CC app/fio/bdev/fio_plugin.o 00:04:30.463 CXX test/cpp_headers/fsdev.o 00:04:30.463 CXX 
test/cpp_headers/fsdev_module.o 00:04:30.721 LINK nvme_manage 00:04:30.721 CC test/lvol/esnap/esnap.o 00:04:30.721 CC test/nvme/aer/aer.o 00:04:30.721 CC examples/nvme/arbitration/arbitration.o 00:04:30.721 CXX test/cpp_headers/ftl.o 00:04:30.721 CC examples/nvme/hotplug/hotplug.o 00:04:30.721 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:30.721 CXX test/cpp_headers/fuse_dispatcher.o 00:04:30.980 LINK spdk_nvme 00:04:30.980 LINK cmb_copy 00:04:30.980 CXX test/cpp_headers/gpt_spec.o 00:04:30.980 LINK aer 00:04:30.980 CC examples/nvme/abort/abort.o 00:04:30.980 LINK hotplug 00:04:31.238 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:31.238 LINK arbitration 00:04:31.238 LINK spdk_bdev 00:04:31.238 CXX test/cpp_headers/hexlify.o 00:04:31.238 CC test/nvme/reset/reset.o 00:04:31.238 LINK pmr_persistence 00:04:31.238 CC test/nvme/sgl/sgl.o 00:04:31.497 CC test/nvme/e2edp/nvme_dp.o 00:04:31.497 CXX test/cpp_headers/histogram_data.o 00:04:31.497 CC test/nvme/overhead/overhead.o 00:04:31.497 CXX test/cpp_headers/idxd.o 00:04:31.497 LINK bdevperf 00:04:31.497 LINK abort 00:04:31.497 CC test/bdev/bdevio/bdevio.o 00:04:31.756 LINK reset 00:04:31.756 LINK sgl 00:04:31.756 CC test/nvme/err_injection/err_injection.o 00:04:31.756 CXX test/cpp_headers/idxd_spec.o 00:04:31.756 LINK overhead 00:04:31.756 LINK nvme_dp 00:04:31.756 CC test/nvme/startup/startup.o 00:04:32.015 CC test/nvme/reserve/reserve.o 00:04:32.015 CXX test/cpp_headers/init.o 00:04:32.015 LINK err_injection 00:04:32.015 CC test/nvme/simple_copy/simple_copy.o 00:04:32.015 CXX test/cpp_headers/ioat.o 00:04:32.015 CC examples/nvmf/nvmf/nvmf.o 00:04:32.015 LINK startup 00:04:32.015 LINK bdevio 00:04:32.015 CC test/nvme/connect_stress/connect_stress.o 00:04:32.274 CXX test/cpp_headers/ioat_spec.o 00:04:32.274 LINK reserve 00:04:32.274 CC test/nvme/compliance/nvme_compliance.o 00:04:32.274 CC test/nvme/boot_partition/boot_partition.o 00:04:32.274 LINK simple_copy 00:04:32.274 LINK connect_stress 00:04:32.274 CC 
test/nvme/fused_ordering/fused_ordering.o 00:04:32.274 LINK nvmf 00:04:32.274 CXX test/cpp_headers/iscsi_spec.o 00:04:32.274 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:32.533 LINK boot_partition 00:04:32.533 CC test/nvme/fdp/fdp.o 00:04:32.533 CXX test/cpp_headers/json.o 00:04:32.533 LINK fused_ordering 00:04:32.533 CXX test/cpp_headers/jsonrpc.o 00:04:32.533 CXX test/cpp_headers/keyring.o 00:04:32.533 CXX test/cpp_headers/keyring_module.o 00:04:32.533 CC test/nvme/cuse/cuse.o 00:04:32.533 LINK doorbell_aers 00:04:32.533 LINK nvme_compliance 00:04:32.792 CXX test/cpp_headers/likely.o 00:04:32.792 CXX test/cpp_headers/log.o 00:04:32.792 CXX test/cpp_headers/lvol.o 00:04:32.792 CXX test/cpp_headers/md5.o 00:04:32.792 CXX test/cpp_headers/memory.o 00:04:32.792 CXX test/cpp_headers/mmio.o 00:04:32.792 CXX test/cpp_headers/nbd.o 00:04:32.792 CXX test/cpp_headers/net.o 00:04:32.792 LINK fdp 00:04:32.792 CXX test/cpp_headers/notify.o 00:04:32.792 CXX test/cpp_headers/nvme.o 00:04:32.792 CXX test/cpp_headers/nvme_intel.o 00:04:33.050 CXX test/cpp_headers/nvme_ocssd.o 00:04:33.050 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:33.050 CXX test/cpp_headers/nvme_spec.o 00:04:33.050 CXX test/cpp_headers/nvme_zns.o 00:04:33.050 CXX test/cpp_headers/nvmf_cmd.o 00:04:33.050 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:33.050 CXX test/cpp_headers/nvmf.o 00:04:33.050 CXX test/cpp_headers/nvmf_spec.o 00:04:33.050 CXX test/cpp_headers/nvmf_transport.o 00:04:33.309 CXX test/cpp_headers/opal.o 00:04:33.309 CXX test/cpp_headers/opal_spec.o 00:04:33.309 CXX test/cpp_headers/pci_ids.o 00:04:33.309 CXX test/cpp_headers/pipe.o 00:04:33.309 CXX test/cpp_headers/queue.o 00:04:33.309 CXX test/cpp_headers/reduce.o 00:04:33.309 CXX test/cpp_headers/rpc.o 00:04:33.309 CXX test/cpp_headers/scheduler.o 00:04:33.309 CXX test/cpp_headers/scsi.o 00:04:33.309 CXX test/cpp_headers/scsi_spec.o 00:04:33.309 CXX test/cpp_headers/sock.o 00:04:33.309 CXX test/cpp_headers/stdinc.o 00:04:33.567 CXX 
test/cpp_headers/string.o 00:04:33.567 CXX test/cpp_headers/thread.o 00:04:33.567 CXX test/cpp_headers/trace.o 00:04:33.567 CXX test/cpp_headers/trace_parser.o 00:04:33.567 CXX test/cpp_headers/tree.o 00:04:33.567 CXX test/cpp_headers/ublk.o 00:04:33.567 CXX test/cpp_headers/util.o 00:04:33.567 CXX test/cpp_headers/uuid.o 00:04:33.567 CXX test/cpp_headers/version.o 00:04:33.567 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.567 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.567 CXX test/cpp_headers/vhost.o 00:04:33.567 CXX test/cpp_headers/vmd.o 00:04:33.567 CXX test/cpp_headers/xor.o 00:04:33.826 CXX test/cpp_headers/zipf.o 00:04:34.084 LINK cuse 00:04:37.367 LINK esnap 00:04:37.625 00:04:37.625 real 1m39.232s 00:04:37.625 user 9m13.567s 00:04:37.625 sys 1m44.225s 00:04:37.625 03:15:51 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:37.625 03:15:51 make -- common/autotest_common.sh@10 -- $ set +x 00:04:37.625 ************************************ 00:04:37.625 END TEST make 00:04:37.625 ************************************ 00:04:37.625 03:15:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:37.625 03:15:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:37.625 03:15:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:37.625 03:15:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.625 03:15:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:37.625 03:15:51 -- pm/common@44 -- $ pid=5248 00:04:37.625 03:15:51 -- pm/common@50 -- $ kill -TERM 5248 00:04:37.625 03:15:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.625 03:15:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:37.625 03:15:51 -- pm/common@44 -- $ pid=5250 00:04:37.625 03:15:51 -- pm/common@50 -- $ kill -TERM 5250 00:04:37.625 03:15:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:37.625 03:15:51 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:37.625 03:15:51 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.625 03:15:51 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.625 03:15:51 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.625 03:15:51 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.625 03:15:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.625 03:15:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.625 03:15:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.625 03:15:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.625 03:15:51 -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.625 03:15:51 -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.625 03:15:51 -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.625 03:15:51 -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.625 03:15:51 -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.625 03:15:51 -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.625 03:15:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.626 03:15:51 -- scripts/common.sh@344 -- # case "$op" in 00:04:37.626 03:15:51 -- scripts/common.sh@345 -- # : 1 00:04:37.626 03:15:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.626 03:15:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.626 03:15:51 -- scripts/common.sh@365 -- # decimal 1 00:04:37.626 03:15:51 -- scripts/common.sh@353 -- # local d=1 00:04:37.626 03:15:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.626 03:15:51 -- scripts/common.sh@355 -- # echo 1 00:04:37.626 03:15:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.626 03:15:51 -- scripts/common.sh@366 -- # decimal 2 00:04:37.626 03:15:51 -- scripts/common.sh@353 -- # local d=2 00:04:37.626 03:15:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.626 03:15:51 -- scripts/common.sh@355 -- # echo 2 00:04:37.626 03:15:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.626 03:15:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.626 03:15:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.626 03:15:51 -- scripts/common.sh@368 -- # return 0 00:04:37.626 03:15:51 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.626 03:15:51 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 03:15:51 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 03:15:51 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc 
genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 03:15:51 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 03:15:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.626 03:15:51 -- nvmf/common.sh@7 -- # uname -s 00:04:37.626 03:15:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.626 03:15:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.626 03:15:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.626 03:15:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.626 03:15:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.626 03:15:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.626 03:15:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.626 03:15:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.626 03:15:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.626 03:15:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.884 03:15:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:714dfb86-ac37-497f-90fb-9f62239d38c2 00:04:37.884 03:15:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=714dfb86-ac37-497f-90fb-9f62239d38c2 00:04:37.884 03:15:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.884 03:15:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.884 03:15:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.884 03:15:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:37.884 03:15:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.884 03:15:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.884 03:15:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.884 03:15:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.884 03:15:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.884 03:15:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.884 03:15:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.884 03:15:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.884 03:15:51 -- paths/export.sh@5 -- # export PATH 00:04:37.885 03:15:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.885 03:15:51 -- nvmf/common.sh@51 -- # : 0 00:04:37.885 03:15:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.885 03:15:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.885 03:15:51 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:37.885 03:15:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.885 03:15:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.885 03:15:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.885 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.885 03:15:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.885 03:15:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.885 03:15:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.885 03:15:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:37.885 03:15:51 -- spdk/autotest.sh@32 -- # uname -s 00:04:37.885 03:15:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:37.885 03:15:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:37.885 03:15:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.885 03:15:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:37.885 03:15:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.885 03:15:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:37.885 03:15:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:37.885 03:15:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:37.885 03:15:51 -- spdk/autotest.sh@48 -- # udevadm_pid=54314 00:04:37.885 03:15:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:37.885 03:15:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:37.885 03:15:51 -- pm/common@17 -- # local monitor 00:04:37.885 03:15:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.885 03:15:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.885 03:15:51 -- pm/common@25 -- # sleep 1 00:04:37.885 03:15:51 -- pm/common@21 -- # date +%s 00:04:37.885 03:15:51 -- 
pm/common@21 -- # date +%s 00:04:37.885 03:15:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730776551 00:04:37.885 03:15:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730776551 00:04:37.885 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730776551_collect-cpu-load.pm.log 00:04:37.885 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730776551_collect-vmstat.pm.log 00:04:38.822 03:15:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:38.822 03:15:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:38.822 03:15:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:38.822 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.822 03:15:52 -- spdk/autotest.sh@59 -- # create_test_list 00:04:38.822 03:15:52 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:38.822 03:15:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.822 03:15:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:38.822 03:15:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:38.822 03:15:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:38.822 03:15:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:38.822 03:15:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:38.822 03:15:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:38.822 03:15:52 -- common/autotest_common.sh@1455 -- # uname 00:04:38.822 03:15:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:38.822 03:15:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:38.822 03:15:52 -- common/autotest_common.sh@1475 -- 
# uname 00:04:38.822 03:15:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:38.822 03:15:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:38.822 03:15:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:39.080 lcov: LCOV version 1.15 00:04:39.080 03:15:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:53.962 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:53.962 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:08.958 03:16:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:08.958 03:16:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.958 03:16:21 -- common/autotest_common.sh@10 -- # set +x 00:05:08.958 03:16:21 -- spdk/autotest.sh@78 -- # rm -f 00:05:08.958 03:16:21 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.958 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:08.958 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:08.958 03:16:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:08.958 03:16:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:08.958 03:16:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:08.958 03:16:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:08.958 
03:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.958 03:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:08.958 03:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:08.958 03:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.958 03:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:08.958 03:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:08.958 03:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.958 03:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:08.958 03:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:08.958 03:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.958 03:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:08.958 03:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:08.958 03:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:08.958 03:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.958 03:16:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:08.958 03:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.958 03:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.958 03:16:22 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:08.958 03:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:08.958 03:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:08.958 No valid GPT data, bailing 00:05:08.958 03:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # pt= 00:05:09.217 03:16:22 -- scripts/common.sh@395 -- # return 1 00:05:09.217 03:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:09.217 1+0 records in 00:05:09.217 1+0 records out 00:05:09.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499477 s, 210 MB/s 00:05:09.217 03:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.217 03:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.217 03:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:09.217 03:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:09.217 03:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:09.217 No valid GPT data, bailing 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # pt= 00:05:09.217 03:16:22 -- scripts/common.sh@395 -- # return 1 00:05:09.217 03:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:09.217 1+0 records in 00:05:09.217 1+0 records out 00:05:09.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362697 s, 289 MB/s 00:05:09.217 03:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.217 03:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.217 03:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:09.217 03:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:09.217 03:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:09.217 No valid GPT data, bailing 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # pt= 00:05:09.217 03:16:22 -- scripts/common.sh@395 -- # return 1 00:05:09.217 03:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:09.217 1+0 records in 00:05:09.217 1+0 records out 00:05:09.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00366505 s, 286 MB/s 00:05:09.217 03:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.217 03:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.217 03:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:09.217 03:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:09.217 03:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:09.217 No valid GPT data, bailing 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:09.217 03:16:22 -- scripts/common.sh@394 -- # pt= 00:05:09.217 03:16:22 -- scripts/common.sh@395 -- # return 1 00:05:09.217 03:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:09.476 1+0 records in 00:05:09.476 1+0 records out 00:05:09.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429266 s, 244 MB/s 00:05:09.476 03:16:22 -- spdk/autotest.sh@105 -- # sync 00:05:09.476 03:16:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.476 03:16:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.476 03:16:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.380 03:16:24 -- spdk/autotest.sh@111 -- # uname -s 00:05:11.380 03:16:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:11.380 03:16:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:11.380 03:16:24 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
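The block_in_use expansions above declare a namespace safe to wipe when spdk-gpt.py bails ("No valid GPT data") and `blkid -s PTTYPE -o value` prints nothing, after which the first 1M of the device is zeroed with dd. A reduced sketch of that gate using only blkid (spdk-gpt.py is SPDK-internal and omitted here; the function name is illustrative):

```shell
# block_is_free: succeed (0) when the given path carries no partition
# table, i.e. blkid finds no PTTYPE signature -- the same test the
# trace's block_in_use performs before dd-zeroing the device.
block_is_free() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null)
    [[ -z $pt ]]
}

# In the log, each free namespace is then cleared with:
#   dd if=/dev/zero of="$block" bs=1M count=1
```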
00:05:11.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.950 Hugepages 00:05:11.950 node hugesize free / total 00:05:12.209 node0 1048576kB 0 / 0 00:05:12.209 node0 2048kB 0 / 0 00:05:12.209 00:05:12.209 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.209 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:12.209 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:12.209 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:12.209 03:16:25 -- spdk/autotest.sh@117 -- # uname -s 00:05:12.209 03:16:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:12.209 03:16:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:12.209 03:16:25 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.145 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.145 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.145 03:16:26 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:14.082 03:16:27 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:14.082 03:16:27 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:14.082 03:16:27 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.082 03:16:27 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:14.082 03:16:27 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:14.082 03:16:27 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:14.082 03:16:27 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.082 03:16:27 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.082 03:16:27 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:14.341 03:16:27 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:14.341 03:16:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:14.341 03:16:27 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.600 Waiting for block devices as requested 00:05:14.600 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:14.859 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:14.859 03:16:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:14.859 03:16:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:14.859 03:16:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:14.859 03:16:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:14.859 03:16:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:14.859 03:16:28 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:14.859 03:16:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:14.859 03:16:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:14.859 03:16:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:14.859 03:16:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:14.859 03:16:28 -- common/autotest_common.sh@1541 -- # continue 00:05:14.859 03:16:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:14.859 03:16:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:14.859 03:16:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:14.859 03:16:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:14.859 03:16:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:14.859 03:16:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:14.859 03:16:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:14.860 03:16:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:14.860 03:16:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:14.860 03:16:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:14.860 03:16:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:14.860 03:16:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:14.860 03:16:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:14.860 03:16:28 -- common/autotest_common.sh@1541 -- # continue 00:05:14.860 03:16:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:14.860 03:16:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.860 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.860 03:16:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:14.860 03:16:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.860 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.860 03:16:28 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.688 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.688 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.688 03:16:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:15.688 03:16:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.688 03:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.688 03:16:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:15.689 03:16:29 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:15.689 03:16:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:15.689 03:16:29 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:15.689 03:16:29 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:15.689 03:16:29 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:15.689 03:16:29 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:15.689 03:16:29 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:15.948 
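The OACS parsing above (`nvme id-ctrl | grep oacs | cut -d: -f2`, yielding `0x12a` and then `oacs_ns_manage=8`) is testing bit 3 of the Optional Admin Command Support field, which NVMe defines as Namespace Management. Without a controller handy, the same extraction can be sketched against a captured id-ctrl line (the sample line below mirrors the 0x12a value in the trace; the function name is illustrative):

```shell
# Extract the OACS value from an `nvme id-ctrl`-style "oacs : 0x..."
# line and mask bit 3 (0x8, Namespace Management), as the trace does.
oacs_ns_manage_from_line() {
    local line=$1 oacs
    oacs=$(cut -d: -f2 <<<"$line")   # yields " 0x12a"; spaces are fine
    echo $(( oacs & 0x8 ))           # 8 when NS management is supported
}
```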
03:16:29 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:15.948 03:16:29 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:15.948 03:16:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:15.948 03:16:29 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:15.948 03:16:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:15.948 03:16:29 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:15.948 03:16:29 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:15.948 03:16:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:15.948 03:16:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:15.948 03:16:29 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:15.948 03:16:29 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:15.948 03:16:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:15.948 03:16:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:15.948 03:16:29 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:15.948 03:16:29 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:15.948 03:16:29 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:15.948 03:16:29 -- common/autotest_common.sh@1570 -- # return 0 00:05:15.948 03:16:29 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:15.948 03:16:29 -- common/autotest_common.sh@1578 -- # return 0 00:05:15.948 03:16:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:15.948 03:16:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:15.948 03:16:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:15.948 03:16:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:15.948 03:16:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:15.948 03:16:29 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.948 03:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.948 03:16:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:15.948 03:16:29 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:15.948 03:16:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.948 03:16:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.948 03:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.948 ************************************ 00:05:15.948 START TEST env 00:05:15.948 ************************************ 00:05:15.948 03:16:29 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:15.948 * Looking for test storage... 00:05:15.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:15.948 03:16:29 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.948 03:16:29 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.948 03:16:29 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.207 03:16:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.207 03:16:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.207 03:16:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.207 03:16:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.207 03:16:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.207 03:16:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.207 03:16:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.207 03:16:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.207 03:16:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.207 03:16:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.207 03:16:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.207 03:16:29 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:16.207 03:16:29 env -- scripts/common.sh@345 -- # : 1 00:05:16.207 03:16:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.207 03:16:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.207 03:16:29 env -- scripts/common.sh@365 -- # decimal 1 00:05:16.207 03:16:29 env -- scripts/common.sh@353 -- # local d=1 00:05:16.207 03:16:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.207 03:16:29 env -- scripts/common.sh@355 -- # echo 1 00:05:16.207 03:16:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.207 03:16:29 env -- scripts/common.sh@366 -- # decimal 2 00:05:16.207 03:16:29 env -- scripts/common.sh@353 -- # local d=2 00:05:16.207 03:16:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.207 03:16:29 env -- scripts/common.sh@355 -- # echo 2 00:05:16.207 03:16:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.207 03:16:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.207 03:16:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.207 03:16:29 env -- scripts/common.sh@368 -- # return 0 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.207 --rc genhtml_branch_coverage=1 00:05:16.207 --rc genhtml_function_coverage=1 00:05:16.207 --rc genhtml_legend=1 00:05:16.207 --rc geninfo_all_blocks=1 00:05:16.207 --rc geninfo_unexecuted_blocks=1 00:05:16.207 00:05:16.207 ' 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.207 --rc genhtml_branch_coverage=1 00:05:16.207 --rc genhtml_function_coverage=1 00:05:16.207 --rc genhtml_legend=1 00:05:16.207 --rc 
geninfo_all_blocks=1 00:05:16.207 --rc geninfo_unexecuted_blocks=1 00:05:16.207 00:05:16.207 ' 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.207 --rc genhtml_branch_coverage=1 00:05:16.207 --rc genhtml_function_coverage=1 00:05:16.207 --rc genhtml_legend=1 00:05:16.207 --rc geninfo_all_blocks=1 00:05:16.207 --rc geninfo_unexecuted_blocks=1 00:05:16.207 00:05:16.207 ' 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.207 --rc genhtml_branch_coverage=1 00:05:16.207 --rc genhtml_function_coverage=1 00:05:16.207 --rc genhtml_legend=1 00:05:16.207 --rc geninfo_all_blocks=1 00:05:16.207 --rc geninfo_unexecuted_blocks=1 00:05:16.207 00:05:16.207 ' 00:05:16.207 03:16:29 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.207 03:16:29 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.207 03:16:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.207 ************************************ 00:05:16.207 START TEST env_memory 00:05:16.207 ************************************ 00:05:16.207 03:16:29 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:16.207 00:05:16.207 00:05:16.207 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.207 http://cunit.sourceforge.net/ 00:05:16.207 00:05:16.207 00:05:16.207 Suite: memory 00:05:16.207 Test: alloc and free memory map ...[2024-11-05 03:16:29.675620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.207 passed 00:05:16.207 Test: mem map translation ...[2024-11-05 03:16:29.720139] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:16.207 [2024-11-05 03:16:29.720210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.207 [2024-11-05 03:16:29.720317] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.207 [2024-11-05 03:16:29.720355] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.207 passed 00:05:16.207 Test: mem map registration ...[2024-11-05 03:16:29.791042] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:16.207 [2024-11-05 03:16:29.791106] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:16.207 passed 00:05:16.493 Test: mem map adjacent registrations ...passed 00:05:16.493 00:05:16.493 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.493 suites 1 1 n/a 0 0 00:05:16.493 tests 4 4 4 0 0 00:05:16.493 asserts 152 152 152 0 n/a 00:05:16.493 00:05:16.493 Elapsed time = 0.248 seconds 00:05:16.493 00:05:16.493 real 0m0.287s 00:05:16.493 user 0m0.254s 00:05:16.493 sys 0m0.025s 00:05:16.493 03:16:29 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.493 ************************************ 00:05:16.493 END TEST env_memory 00:05:16.493 03:16:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:16.493 ************************************ 00:05:16.493 03:16:29 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.493 
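Earlier in the trace, `lt 1.15 2` expands cmp_versions from scripts/common.sh: both version strings are split on `IFS=.-:` into arrays and compared field by field up to the longer length, with missing fields read as zero. A compact sketch of that field-wise comparison (the function name is illustrative; the real helper also dispatches on `>` and `=` operators):

```shell
# version_lt A B: succeed when version A sorts strictly before B,
# comparing dot/dash/colon-separated numeric fields left to right --
# the same IFS=.-: splitting the cmp_versions trace shows.
version_lt() {
    local IFS=.-: v
    local -a a b
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}   # absent fields count as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal => not strictly less
}
```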
03:16:29 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.493 03:16:29 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.493 03:16:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.493 ************************************ 00:05:16.493 START TEST env_vtophys 00:05:16.493 ************************************ 00:05:16.493 03:16:29 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.493 EAL: lib.eal log level changed from notice to debug 00:05:16.493 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 1 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 2 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 3 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 4 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 5 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 6 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 7 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 8 as core 0 on socket 0 00:05:16.493 EAL: Detected lcore 9 as core 0 on socket 0 00:05:16.493 EAL: Maximum logical cores by configuration: 128 00:05:16.493 EAL: Detected CPU lcores: 10 00:05:16.493 EAL: Detected NUMA nodes: 1 00:05:16.493 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:16.493 EAL: Detected shared linkage of DPDK 00:05:16.493 EAL: No shared files mode enabled, IPC will be disabled 00:05:16.493 EAL: Selected IOVA mode 'PA' 00:05:16.493 EAL: Probing VFIO support... 00:05:16.493 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.493 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:16.493 EAL: Ask a virtual area of 0x2e000 bytes 00:05:16.493 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:16.493 EAL: Setting up physically contiguous memory... 
00:05:16.493 EAL: Setting maximum number of open files to 524288 00:05:16.493 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:16.493 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:16.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.493 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:16.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.493 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:16.493 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:16.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.493 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:16.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.493 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:16.493 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:16.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.493 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:16.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.493 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:16.493 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:16.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.493 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:16.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.493 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:16.493 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:16.493 EAL: Hugepages will be freed exactly as allocated. 
00:05:16.493 EAL: No shared files mode enabled, IPC is disabled 00:05:16.493 EAL: No shared files mode enabled, IPC is disabled 00:05:16.755 EAL: TSC frequency is ~2200000 KHz 00:05:16.755 EAL: Main lcore 0 is ready (tid=7f43fe9d2a40;cpuset=[0]) 00:05:16.755 EAL: Trying to obtain current memory policy. 00:05:16.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.755 EAL: Restoring previous memory policy: 0 00:05:16.755 EAL: request: mp_malloc_sync 00:05:16.755 EAL: No shared files mode enabled, IPC is disabled 00:05:16.755 EAL: Heap on socket 0 was expanded by 2MB 00:05:16.755 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.755 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:16.755 EAL: Mem event callback 'spdk:(nil)' registered 00:05:16.755 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:16.755 00:05:16.755 00:05:16.755 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.755 http://cunit.sourceforge.net/ 00:05:16.755 00:05:16.755 00:05:16.755 Suite: components_suite 00:05:17.015 Test: vtophys_malloc_test ...passed 00:05:17.015 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:17.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.015 EAL: Restoring previous memory policy: 4 00:05:17.015 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.015 EAL: request: mp_malloc_sync 00:05:17.015 EAL: No shared files mode enabled, IPC is disabled 00:05:17.015 EAL: Heap on socket 0 was expanded by 4MB 00:05:17.015 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.015 EAL: request: mp_malloc_sync 00:05:17.015 EAL: No shared files mode enabled, IPC is disabled 00:05:17.015 EAL: Heap on socket 0 was shrunk by 4MB 00:05:17.015 EAL: Trying to obtain current memory policy. 
00:05:17.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.275 EAL: Restoring previous memory policy: 4 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was expanded by 6MB 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was shrunk by 6MB 00:05:17.275 EAL: Trying to obtain current memory policy. 00:05:17.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.275 EAL: Restoring previous memory policy: 4 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was expanded by 10MB 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was shrunk by 10MB 00:05:17.275 EAL: Trying to obtain current memory policy. 00:05:17.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.275 EAL: Restoring previous memory policy: 4 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was expanded by 18MB 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was shrunk by 18MB 00:05:17.275 EAL: Trying to obtain current memory policy. 
00:05:17.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.275 EAL: Restoring previous memory policy: 4 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was expanded by 34MB 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was shrunk by 34MB 00:05:17.275 EAL: Trying to obtain current memory policy. 00:05:17.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.275 EAL: Restoring previous memory policy: 4 00:05:17.275 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.275 EAL: request: mp_malloc_sync 00:05:17.275 EAL: No shared files mode enabled, IPC is disabled 00:05:17.275 EAL: Heap on socket 0 was expanded by 66MB 00:05:17.534 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.534 EAL: request: mp_malloc_sync 00:05:17.534 EAL: No shared files mode enabled, IPC is disabled 00:05:17.534 EAL: Heap on socket 0 was shrunk by 66MB 00:05:17.534 EAL: Trying to obtain current memory policy. 00:05:17.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.535 EAL: Restoring previous memory policy: 4 00:05:17.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.535 EAL: request: mp_malloc_sync 00:05:17.535 EAL: No shared files mode enabled, IPC is disabled 00:05:17.535 EAL: Heap on socket 0 was expanded by 130MB 00:05:17.793 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.793 EAL: request: mp_malloc_sync 00:05:17.793 EAL: No shared files mode enabled, IPC is disabled 00:05:17.793 EAL: Heap on socket 0 was shrunk by 130MB 00:05:17.793 EAL: Trying to obtain current memory policy. 
00:05:17.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.052 EAL: Restoring previous memory policy: 4 00:05:18.052 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.052 EAL: request: mp_malloc_sync 00:05:18.052 EAL: No shared files mode enabled, IPC is disabled 00:05:18.052 EAL: Heap on socket 0 was expanded by 258MB 00:05:18.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.311 EAL: request: mp_malloc_sync 00:05:18.311 EAL: No shared files mode enabled, IPC is disabled 00:05:18.311 EAL: Heap on socket 0 was shrunk by 258MB 00:05:18.879 EAL: Trying to obtain current memory policy. 00:05:18.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.879 EAL: Restoring previous memory policy: 4 00:05:18.879 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.879 EAL: request: mp_malloc_sync 00:05:18.879 EAL: No shared files mode enabled, IPC is disabled 00:05:18.879 EAL: Heap on socket 0 was expanded by 514MB 00:05:19.816 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.816 EAL: request: mp_malloc_sync 00:05:19.816 EAL: No shared files mode enabled, IPC is disabled 00:05:19.816 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.411 EAL: Trying to obtain current memory policy. 
00:05:20.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.670 EAL: Restoring previous memory policy: 4 00:05:20.670 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.670 EAL: request: mp_malloc_sync 00:05:20.670 EAL: No shared files mode enabled, IPC is disabled 00:05:20.670 EAL: Heap on socket 0 was expanded by 1026MB 00:05:22.046 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.306 EAL: request: mp_malloc_sync 00:05:22.306 EAL: No shared files mode enabled, IPC is disabled 00:05:22.306 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.683 passed 00:05:23.683 00:05:23.683 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.683 suites 1 1 n/a 0 0 00:05:23.683 tests 2 2 2 0 0 00:05:23.683 asserts 5761 5761 5761 0 n/a 00:05:23.683 00:05:23.683 Elapsed time = 6.852 seconds 00:05:23.683 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.683 EAL: request: mp_malloc_sync 00:05:23.683 EAL: No shared files mode enabled, IPC is disabled 00:05:23.683 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.683 EAL: No shared files mode enabled, IPC is disabled 00:05:23.683 EAL: No shared files mode enabled, IPC is disabled 00:05:23.683 EAL: No shared files mode enabled, IPC is disabled 00:05:23.683 ************************************ 00:05:23.683 END TEST env_vtophys 00:05:23.683 ************************************ 00:05:23.683 00:05:23.683 real 0m7.201s 00:05:23.683 user 0m6.026s 00:05:23.683 sys 0m1.012s 00:05:23.683 03:16:37 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.683 03:16:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:23.683 03:16:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.683 03:16:37 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.683 03:16:37 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.683 03:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.683 
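The env_vtophys run above shows a strict pairing in the EAL messages: each "Heap on socket 0 was expanded by NMB" is later matched by a "shrunk by NMB" once the allocation is released. A minimal sketch of a log-analysis helper that checks this invariant (this is an illustration written against the log format above, not part of the SPDK test suite):

```python
import re

# Hypothetical checker: every "expanded by NMB" EAL message should be
# balanced by a "shrunk by NMB" message for the same size.
def heap_balance(lines):
    """Return a dict mapping size (MB) to net expand count; empty means balanced."""
    balance = {}
    pat = re.compile(r"Heap on socket \d+ was (expanded|shrunk) by (\d+)MB")
    for line in lines:
        m = pat.search(line)
        if m:
            delta = 1 if m.group(1) == "expanded" else -1
            size = int(m.group(2))
            balance[size] = balance.get(size, 0) + delta
    return {s: n for s, n in balance.items() if n != 0}

log = [
    "EAL: Heap on socket 0 was expanded by 66MB",
    "EAL: Heap on socket 0 was shrunk by 66MB",
    "EAL: Heap on socket 0 was expanded by 130MB",
    "EAL: Heap on socket 0 was shrunk by 130MB",
]
print(heap_balance(log))  # {} -> every expansion was matched by a shrink
```

An unbalanced log (an expansion with no matching shrink) would instead show up as a nonzero net count for that size.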
************************************ 00:05:23.683 START TEST env_pci 00:05:23.683 ************************************ 00:05:23.683 03:16:37 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.683 00:05:23.683 00:05:23.683 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.683 http://cunit.sourceforge.net/ 00:05:23.683 00:05:23.683 00:05:23.683 Suite: pci 00:05:23.683 Test: pci_hook ...[2024-11-05 03:16:37.244454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56589 has claimed it 00:05:23.683 passed 00:05:23.683 00:05:23.683 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.683 suites 1 1 n/a 0 0 00:05:23.683 tests 1 1 1 0 0 00:05:23.683 asserts 25 25 25 0 n/a 00:05:23.683 00:05:23.683 Elapsed time = 0.008 seconds 00:05:23.683 EAL: Cannot find device (10000:00:01.0) 00:05:23.683 EAL: Failed to attach device on primary process 00:05:23.683 00:05:23.683 real 0m0.080s 00:05:23.683 user 0m0.034s 00:05:23.683 sys 0m0.044s 00:05:23.683 03:16:37 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.683 03:16:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:23.683 ************************************ 00:05:23.683 END TEST env_pci 00:05:23.683 ************************************ 00:05:23.943 03:16:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.943 03:16:37 env -- env/env.sh@15 -- # uname 00:05:23.943 03:16:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.943 03:16:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.943 03:16:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.943 03:16:37 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:23.943 03:16:37 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.943 03:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.943 ************************************ 00:05:23.943 START TEST env_dpdk_post_init 00:05:23.943 ************************************ 00:05:23.943 03:16:37 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.943 EAL: Detected CPU lcores: 10 00:05:23.943 EAL: Detected NUMA nodes: 1 00:05:23.943 EAL: Detected shared linkage of DPDK 00:05:23.943 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.943 EAL: Selected IOVA mode 'PA' 00:05:23.943 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.202 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:24.202 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:24.202 Starting DPDK initialization... 00:05:24.202 Starting SPDK post initialization... 00:05:24.202 SPDK NVMe probe 00:05:24.202 Attaching to 0000:00:10.0 00:05:24.202 Attaching to 0000:00:11.0 00:05:24.202 Attached to 0000:00:10.0 00:05:24.202 Attached to 0000:00:11.0 00:05:24.202 Cleaning up... 
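The probe lines in env_dpdk_post_init identify each NVMe device by vendor:device ID and PCI address (BDF). A small sketch that extracts the BDF addresses from probe lines of the shape seen above (this assumes only the log format shown here; it is not an SPDK or DPDK API):

```python
import re

# Pull PCI addresses (domain:bus:device.function) out of EAL probe lines like:
# "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)"
BDF = re.compile(r"device: ([0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.\d)")

def probed_devices(lines):
    """Return the list of PCI addresses that appeared in probe lines."""
    return [m.group(1) for line in lines if (m := BDF.search(line))]

lines = [
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)",
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)",
]
print(probed_devices(lines))  # ['0000:00:10.0', '0000:00:11.0']
```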
00:05:24.202 ************************************ 00:05:24.202 END TEST env_dpdk_post_init 00:05:24.202 ************************************ 00:05:24.202 00:05:24.202 real 0m0.296s 00:05:24.202 user 0m0.099s 00:05:24.202 sys 0m0.096s 00:05:24.202 03:16:37 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.202 03:16:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.202 03:16:37 env -- env/env.sh@26 -- # uname 00:05:24.202 03:16:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.202 03:16:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.202 03:16:37 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.202 03:16:37 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.202 03:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.202 ************************************ 00:05:24.202 START TEST env_mem_callbacks 00:05:24.202 ************************************ 00:05:24.202 03:16:37 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.202 EAL: Detected CPU lcores: 10 00:05:24.202 EAL: Detected NUMA nodes: 1 00:05:24.202 EAL: Detected shared linkage of DPDK 00:05:24.202 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.202 EAL: Selected IOVA mode 'PA' 00:05:24.460 00:05:24.460 00:05:24.460 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.460 http://cunit.sourceforge.net/ 00:05:24.460 00:05:24.460 00:05:24.460 Suite: memory 00:05:24.460 Test: test ... 
00:05:24.460 register 0x200000200000 2097152 00:05:24.460 malloc 3145728 00:05:24.460 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.460 register 0x200000400000 4194304 00:05:24.460 buf 0x2000004fffc0 len 3145728 PASSED 00:05:24.460 malloc 64 00:05:24.460 buf 0x2000004ffec0 len 64 PASSED 00:05:24.460 malloc 4194304 00:05:24.460 register 0x200000800000 6291456 00:05:24.460 buf 0x2000009fffc0 len 4194304 PASSED 00:05:24.460 free 0x2000004fffc0 3145728 00:05:24.460 free 0x2000004ffec0 64 00:05:24.460 unregister 0x200000400000 4194304 PASSED 00:05:24.460 free 0x2000009fffc0 4194304 00:05:24.460 unregister 0x200000800000 6291456 PASSED 00:05:24.460 malloc 8388608 00:05:24.460 register 0x200000400000 10485760 00:05:24.460 buf 0x2000005fffc0 len 8388608 PASSED 00:05:24.460 free 0x2000005fffc0 8388608 00:05:24.460 unregister 0x200000400000 10485760 PASSED 00:05:24.460 passed 00:05:24.460 00:05:24.460 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.460 suites 1 1 n/a 0 0 00:05:24.460 tests 1 1 1 0 0 00:05:24.460 asserts 15 15 15 0 n/a 00:05:24.460 00:05:24.460 Elapsed time = 0.057 seconds 00:05:24.460 00:05:24.460 real 0m0.261s 00:05:24.460 user 0m0.086s 00:05:24.460 sys 0m0.072s 00:05:24.460 03:16:37 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.460 03:16:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:24.460 ************************************ 00:05:24.460 END TEST env_mem_callbacks 00:05:24.460 ************************************ 00:05:24.460 00:05:24.460 real 0m8.588s 00:05:24.460 user 0m6.701s 00:05:24.460 sys 0m1.491s 00:05:24.460 ************************************ 00:05:24.460 END TEST env 00:05:24.460 ************************************ 00:05:24.460 03:16:38 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.460 03:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.460 03:16:38 -- spdk/autotest.sh@156 -- # run_test rpc 
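The mem_callbacks trace above interleaves malloc/free with the register/unregister events delivered to the memory callback. A hedged sketch of a checker that every registered region is eventually unregistered (again a log-analysis helper over the trace format shown, not part of the test itself); note that in the actual trace the initial 2MB region at 0x200000200000 is only unregistered at shutdown, after the run summary:

```python
def unmatched_registrations(trace):
    """Track 'register ADDR LEN' / 'unregister ADDR LEN' trace lines and
    return the (addr, len) regions that were never unregistered."""
    live = {}
    for line in trace:
        parts = line.split()
        if parts and parts[0] == "register":
            live[(parts[1], parts[2])] = True
        elif parts and parts[0] == "unregister":
            live.pop((parts[1], parts[2]), None)
    return sorted(live)

trace = [
    "register 0x200000200000 2097152",
    "register 0x200000400000 4194304",
    "unregister 0x200000400000 4194304",
    "register 0x200000800000 6291456",
    "unregister 0x200000800000 6291456",
]
print(unmatched_registrations(trace))  # only the initial region remains live
```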
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.460 03:16:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.461 03:16:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.461 03:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.461 ************************************ 00:05:24.461 START TEST rpc 00:05:24.461 ************************************ 00:05:24.461 03:16:38 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.719 * Looking for test storage... 00:05:24.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.719 03:16:38 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:24.719 03:16:38 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:24.719 03:16:38 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.719 03:16:38 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.719 03:16:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.719 03:16:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.719 03:16:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.719 03:16:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.720 03:16:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.720 03:16:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.720 03:16:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.720 03:16:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.720 03:16:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.720 03:16:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.720 03:16:38 rpc -- scripts/common.sh@345 -- # : 1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.720 03:16:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.720 03:16:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.720 03:16:38 rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.720 03:16:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.720 03:16:38 rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.720 03:16:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.720 03:16:38 rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.720 03:16:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.720 03:16:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.720 03:16:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.720 03:16:38 rpc -- scripts/common.sh@368 -- # return 0 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.720 --rc genhtml_branch_coverage=1 00:05:24.720 --rc genhtml_function_coverage=1 00:05:24.720 --rc genhtml_legend=1 00:05:24.720 --rc geninfo_all_blocks=1 00:05:24.720 --rc geninfo_unexecuted_blocks=1 00:05:24.720 00:05:24.720 ' 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.720 --rc genhtml_branch_coverage=1 00:05:24.720 --rc genhtml_function_coverage=1 00:05:24.720 --rc genhtml_legend=1 00:05:24.720 --rc geninfo_all_blocks=1 00:05:24.720 --rc geninfo_unexecuted_blocks=1 00:05:24.720 00:05:24.720 ' 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:05:24.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.720 --rc genhtml_branch_coverage=1 00:05:24.720 --rc genhtml_function_coverage=1 00:05:24.720 --rc genhtml_legend=1 00:05:24.720 --rc geninfo_all_blocks=1 00:05:24.720 --rc geninfo_unexecuted_blocks=1 00:05:24.720 00:05:24.720 ' 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.720 --rc genhtml_branch_coverage=1 00:05:24.720 --rc genhtml_function_coverage=1 00:05:24.720 --rc genhtml_legend=1 00:05:24.720 --rc geninfo_all_blocks=1 00:05:24.720 --rc geninfo_unexecuted_blocks=1 00:05:24.720 00:05:24.720 ' 00:05:24.720 03:16:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56716 00:05:24.720 03:16:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.720 03:16:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56716 00:05:24.720 03:16:38 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@833 -- # '[' -z 56716 ']' 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.720 03:16:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.979 [2024-11-05 03:16:38.371608] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:24.979 [2024-11-05 03:16:38.372038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56716 ] 00:05:24.979 [2024-11-05 03:16:38.565861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.237 [2024-11-05 03:16:38.717146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.237 [2024-11-05 03:16:38.717527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56716' to capture a snapshot of events at runtime. 00:05:25.237 [2024-11-05 03:16:38.717790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.237 [2024-11-05 03:16:38.718056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.237 [2024-11-05 03:16:38.718259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56716 for offline analysis/debug. 
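The startup line above packs the DPDK EAL parameters into one string of mixed `--key=value`, `-c 0x1`, and bare-flag tokens. A hypothetical helper (illustration only; real EAL parsing lives in DPDK and handles repeated flags like `--log-level` differently) for splitting such a string into a dict:

```python
def parse_eal_args(argstr):
    """Split an EAL parameter string like '--no-shconf -c 0x1 --iova-mode=pa'
    into a dict; bare flags map to True, short-option pairs are joined.
    Repeated keys (e.g. multiple --log-level flags) keep only the last value."""
    args = {}
    tokens = argstr.split()
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.startswith("--") and "=" in tok:
            key, val = tok[2:].split("=", 1)
            args[key] = val
        elif tok.startswith("-") and i + 1 < len(tokens) and not tokens[i + 1].startswith("-"):
            args[tok.lstrip("-")] = tokens[i + 1]
            i += 1
        else:
            args[tok.lstrip("-")] = True
        i += 1
    return args

print(parse_eal_args("--no-shconf -c 0x1 --iova-mode=pa --base-virtaddr=0x200000000000"))
```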
00:05:25.237 [2024-11-05 03:16:38.720006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.204 03:16:39 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.204 03:16:39 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:26.204 03:16:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.204 03:16:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.204 03:16:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.204 03:16:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.204 03:16:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.204 03:16:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.204 03:16:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.204 ************************************ 00:05:26.204 START TEST rpc_integrity 00:05:26.204 ************************************ 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.204 03:16:39 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.204 { 00:05:26.204 "name": "Malloc0", 00:05:26.204 "aliases": [ 00:05:26.204 "349639dc-fd04-4bff-b15f-1342002a7458" 00:05:26.204 ], 00:05:26.204 "product_name": "Malloc disk", 00:05:26.204 "block_size": 512, 00:05:26.204 "num_blocks": 16384, 00:05:26.204 "uuid": "349639dc-fd04-4bff-b15f-1342002a7458", 00:05:26.204 "assigned_rate_limits": { 00:05:26.204 "rw_ios_per_sec": 0, 00:05:26.204 "rw_mbytes_per_sec": 0, 00:05:26.204 "r_mbytes_per_sec": 0, 00:05:26.204 "w_mbytes_per_sec": 0 00:05:26.204 }, 00:05:26.204 "claimed": false, 00:05:26.204 "zoned": false, 00:05:26.204 "supported_io_types": { 00:05:26.204 "read": true, 00:05:26.204 "write": true, 00:05:26.204 "unmap": true, 00:05:26.204 "flush": true, 00:05:26.204 "reset": true, 00:05:26.204 "nvme_admin": false, 00:05:26.204 "nvme_io": false, 00:05:26.204 "nvme_io_md": false, 00:05:26.204 "write_zeroes": true, 00:05:26.204 "zcopy": true, 00:05:26.204 "get_zone_info": false, 00:05:26.204 "zone_management": false, 00:05:26.204 "zone_append": false, 00:05:26.204 "compare": false, 00:05:26.204 "compare_and_write": false, 00:05:26.204 "abort": true, 00:05:26.204 "seek_hole": false, 
00:05:26.204 "seek_data": false, 00:05:26.204 "copy": true, 00:05:26.204 "nvme_iov_md": false 00:05:26.204 }, 00:05:26.204 "memory_domains": [ 00:05:26.204 { 00:05:26.204 "dma_device_id": "system", 00:05:26.204 "dma_device_type": 1 00:05:26.204 }, 00:05:26.204 { 00:05:26.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.204 "dma_device_type": 2 00:05:26.204 } 00:05:26.204 ], 00:05:26.204 "driver_specific": {} 00:05:26.204 } 00:05:26.204 ]' 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.204 [2024-11-05 03:16:39.732914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.204 [2024-11-05 03:16:39.732998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.204 [2024-11-05 03:16:39.733027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:26.204 [2024-11-05 03:16:39.733049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.204 [2024-11-05 03:16:39.736107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.204 [2024-11-05 03:16:39.736174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.204 Passthru0 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:26.204 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.204 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.204 { 00:05:26.204 "name": "Malloc0", 00:05:26.204 "aliases": [ 00:05:26.204 "349639dc-fd04-4bff-b15f-1342002a7458" 00:05:26.204 ], 00:05:26.204 "product_name": "Malloc disk", 00:05:26.204 "block_size": 512, 00:05:26.204 "num_blocks": 16384, 00:05:26.204 "uuid": "349639dc-fd04-4bff-b15f-1342002a7458", 00:05:26.204 "assigned_rate_limits": { 00:05:26.204 "rw_ios_per_sec": 0, 00:05:26.204 "rw_mbytes_per_sec": 0, 00:05:26.204 "r_mbytes_per_sec": 0, 00:05:26.204 "w_mbytes_per_sec": 0 00:05:26.204 }, 00:05:26.204 "claimed": true, 00:05:26.204 "claim_type": "exclusive_write", 00:05:26.204 "zoned": false, 00:05:26.204 "supported_io_types": { 00:05:26.204 "read": true, 00:05:26.204 "write": true, 00:05:26.204 "unmap": true, 00:05:26.204 "flush": true, 00:05:26.204 "reset": true, 00:05:26.204 "nvme_admin": false, 00:05:26.204 "nvme_io": false, 00:05:26.204 "nvme_io_md": false, 00:05:26.204 "write_zeroes": true, 00:05:26.204 "zcopy": true, 00:05:26.204 "get_zone_info": false, 00:05:26.204 "zone_management": false, 00:05:26.204 "zone_append": false, 00:05:26.204 "compare": false, 00:05:26.204 "compare_and_write": false, 00:05:26.204 "abort": true, 00:05:26.204 "seek_hole": false, 00:05:26.204 "seek_data": false, 00:05:26.204 "copy": true, 00:05:26.204 "nvme_iov_md": false 00:05:26.204 }, 00:05:26.204 "memory_domains": [ 00:05:26.204 { 00:05:26.204 "dma_device_id": "system", 00:05:26.204 "dma_device_type": 1 00:05:26.204 }, 00:05:26.204 { 00:05:26.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.204 "dma_device_type": 2 00:05:26.204 } 00:05:26.204 ], 00:05:26.204 "driver_specific": {} 00:05:26.204 }, 00:05:26.204 { 00:05:26.204 "name": "Passthru0", 00:05:26.204 "aliases": [ 00:05:26.204 "8bb89ef8-92c0-5db6-9ca0-5df9f843df83" 00:05:26.204 ], 00:05:26.204 "product_name": "passthru", 00:05:26.204 
"block_size": 512, 00:05:26.204 "num_blocks": 16384, 00:05:26.204 "uuid": "8bb89ef8-92c0-5db6-9ca0-5df9f843df83", 00:05:26.204 "assigned_rate_limits": { 00:05:26.204 "rw_ios_per_sec": 0, 00:05:26.204 "rw_mbytes_per_sec": 0, 00:05:26.204 "r_mbytes_per_sec": 0, 00:05:26.204 "w_mbytes_per_sec": 0 00:05:26.204 }, 00:05:26.204 "claimed": false, 00:05:26.204 "zoned": false, 00:05:26.204 "supported_io_types": { 00:05:26.204 "read": true, 00:05:26.204 "write": true, 00:05:26.204 "unmap": true, 00:05:26.204 "flush": true, 00:05:26.204 "reset": true, 00:05:26.205 "nvme_admin": false, 00:05:26.205 "nvme_io": false, 00:05:26.205 "nvme_io_md": false, 00:05:26.205 "write_zeroes": true, 00:05:26.205 "zcopy": true, 00:05:26.205 "get_zone_info": false, 00:05:26.205 "zone_management": false, 00:05:26.205 "zone_append": false, 00:05:26.205 "compare": false, 00:05:26.205 "compare_and_write": false, 00:05:26.205 "abort": true, 00:05:26.205 "seek_hole": false, 00:05:26.205 "seek_data": false, 00:05:26.205 "copy": true, 00:05:26.205 "nvme_iov_md": false 00:05:26.205 }, 00:05:26.205 "memory_domains": [ 00:05:26.205 { 00:05:26.205 "dma_device_id": "system", 00:05:26.205 "dma_device_type": 1 00:05:26.205 }, 00:05:26.205 { 00:05:26.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.205 "dma_device_type": 2 00:05:26.205 } 00:05:26.205 ], 00:05:26.205 "driver_specific": { 00:05:26.205 "passthru": { 00:05:26.205 "name": "Passthru0", 00:05:26.205 "base_bdev_name": "Malloc0" 00:05:26.205 } 00:05:26.205 } 00:05:26.205 } 00:05:26.205 ]' 00:05:26.205 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.205 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.205 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.205 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.205 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.205 03:16:39 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.205 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.205 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.205 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.465 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.465 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.465 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.465 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.465 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.465 ************************************ 00:05:26.465 END TEST rpc_integrity 00:05:26.465 ************************************ 00:05:26.465 03:16:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.465 00:05:26.465 real 0m0.357s 00:05:26.465 user 0m0.218s 00:05:26.465 sys 0m0.039s 00:05:26.465 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.465 03:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 03:16:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.465 03:16:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.465 03:16:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.465 03:16:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 ************************************ 00:05:26.465 START TEST rpc_plugins 00:05:26.465 ************************************ 00:05:26.465 03:16:39 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:26.465 03:16:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.465 03:16:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.465 03:16:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.465 { 00:05:26.465 "name": "Malloc1", 00:05:26.465 "aliases": [ 00:05:26.465 "c0d8f266-9aab-4a69-9433-616f5dcffe8e" 00:05:26.465 ], 00:05:26.465 "product_name": "Malloc disk", 00:05:26.465 "block_size": 4096, 00:05:26.465 "num_blocks": 256, 00:05:26.465 "uuid": "c0d8f266-9aab-4a69-9433-616f5dcffe8e", 00:05:26.465 "assigned_rate_limits": { 00:05:26.465 "rw_ios_per_sec": 0, 00:05:26.465 "rw_mbytes_per_sec": 0, 00:05:26.465 "r_mbytes_per_sec": 0, 00:05:26.465 "w_mbytes_per_sec": 0 00:05:26.465 }, 00:05:26.465 "claimed": false, 00:05:26.465 "zoned": false, 00:05:26.465 "supported_io_types": { 00:05:26.465 "read": true, 00:05:26.465 "write": true, 00:05:26.465 "unmap": true, 00:05:26.465 "flush": true, 00:05:26.465 "reset": true, 00:05:26.465 "nvme_admin": false, 00:05:26.465 "nvme_io": false, 00:05:26.465 "nvme_io_md": false, 00:05:26.465 "write_zeroes": true, 00:05:26.465 "zcopy": true, 00:05:26.465 "get_zone_info": false, 00:05:26.465 "zone_management": false, 00:05:26.465 "zone_append": false, 00:05:26.465 "compare": false, 00:05:26.465 "compare_and_write": false, 00:05:26.465 "abort": true, 00:05:26.465 "seek_hole": false, 00:05:26.465 "seek_data": false, 00:05:26.465 "copy": 
true, 00:05:26.465 "nvme_iov_md": false 00:05:26.465 }, 00:05:26.465 "memory_domains": [ 00:05:26.465 { 00:05:26.465 "dma_device_id": "system", 00:05:26.465 "dma_device_type": 1 00:05:26.465 }, 00:05:26.465 { 00:05:26.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.465 "dma_device_type": 2 00:05:26.465 } 00:05:26.465 ], 00:05:26.465 "driver_specific": {} 00:05:26.465 } 00:05:26.465 ]' 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.465 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.465 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.466 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.466 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.466 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.724 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.724 03:16:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.724 ************************************ 00:05:26.724 END TEST rpc_plugins 00:05:26.724 ************************************ 00:05:26.724 00:05:26.724 real 0m0.168s 00:05:26.724 user 0m0.109s 00:05:26.724 sys 0m0.016s 00:05:26.724 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.724 03:16:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.724 03:16:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.724 03:16:40 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.724 03:16:40 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.724 03:16:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.724 ************************************ 00:05:26.724 START TEST rpc_trace_cmd_test 00:05:26.724 ************************************ 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.724 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56716", 00:05:26.724 "tpoint_group_mask": "0x8", 00:05:26.724 "iscsi_conn": { 00:05:26.724 "mask": "0x2", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "scsi": { 00:05:26.724 "mask": "0x4", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "bdev": { 00:05:26.724 "mask": "0x8", 00:05:26.724 "tpoint_mask": "0xffffffffffffffff" 00:05:26.724 }, 00:05:26.724 "nvmf_rdma": { 00:05:26.724 "mask": "0x10", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "nvmf_tcp": { 00:05:26.724 "mask": "0x20", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "ftl": { 00:05:26.724 "mask": "0x40", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "blobfs": { 00:05:26.724 "mask": "0x80", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "dsa": { 00:05:26.724 "mask": "0x200", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "thread": { 00:05:26.724 "mask": "0x400", 00:05:26.724 
"tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "nvme_pcie": { 00:05:26.724 "mask": "0x800", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "iaa": { 00:05:26.724 "mask": "0x1000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "nvme_tcp": { 00:05:26.724 "mask": "0x2000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "bdev_nvme": { 00:05:26.724 "mask": "0x4000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "sock": { 00:05:26.724 "mask": "0x8000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "blob": { 00:05:26.724 "mask": "0x10000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "bdev_raid": { 00:05:26.724 "mask": "0x20000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 }, 00:05:26.724 "scheduler": { 00:05:26.724 "mask": "0x40000", 00:05:26.724 "tpoint_mask": "0x0" 00:05:26.724 } 00:05:26.724 }' 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.724 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.983 ************************************ 00:05:26.983 END TEST rpc_trace_cmd_test 00:05:26.983 ************************************ 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.983 00:05:26.983 real 0m0.273s 00:05:26.983 user 
0m0.234s 00:05:26.983 sys 0m0.028s 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.983 03:16:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.983 03:16:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.983 03:16:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.983 03:16:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.983 03:16:40 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.983 03:16:40 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.983 03:16:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.983 ************************************ 00:05:26.983 START TEST rpc_daemon_integrity 00:05:26.983 ************************************ 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.983 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.242 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.242 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.242 { 00:05:27.242 "name": "Malloc2", 00:05:27.242 "aliases": [ 00:05:27.242 "98b9e92a-41a3-45dc-bc95-126a5de73210" 00:05:27.242 ], 00:05:27.242 "product_name": "Malloc disk", 00:05:27.242 "block_size": 512, 00:05:27.242 "num_blocks": 16384, 00:05:27.242 "uuid": "98b9e92a-41a3-45dc-bc95-126a5de73210", 00:05:27.242 "assigned_rate_limits": { 00:05:27.242 "rw_ios_per_sec": 0, 00:05:27.242 "rw_mbytes_per_sec": 0, 00:05:27.242 "r_mbytes_per_sec": 0, 00:05:27.242 "w_mbytes_per_sec": 0 00:05:27.242 }, 00:05:27.242 "claimed": false, 00:05:27.242 "zoned": false, 00:05:27.242 "supported_io_types": { 00:05:27.242 "read": true, 00:05:27.242 "write": true, 00:05:27.242 "unmap": true, 00:05:27.242 "flush": true, 00:05:27.242 "reset": true, 00:05:27.242 "nvme_admin": false, 00:05:27.242 "nvme_io": false, 00:05:27.242 "nvme_io_md": false, 00:05:27.242 "write_zeroes": true, 00:05:27.242 "zcopy": true, 00:05:27.242 "get_zone_info": false, 00:05:27.242 "zone_management": false, 00:05:27.242 "zone_append": false, 00:05:27.242 "compare": false, 00:05:27.242 "compare_and_write": false, 00:05:27.242 "abort": true, 00:05:27.242 "seek_hole": false, 00:05:27.243 "seek_data": false, 00:05:27.243 "copy": true, 00:05:27.243 "nvme_iov_md": false 00:05:27.243 }, 00:05:27.243 "memory_domains": [ 00:05:27.243 { 00:05:27.243 "dma_device_id": "system", 00:05:27.243 "dma_device_type": 1 00:05:27.243 }, 00:05:27.243 { 00:05:27.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.243 "dma_device_type": 2 00:05:27.243 } 
00:05:27.243 ], 00:05:27.243 "driver_specific": {} 00:05:27.243 } 00:05:27.243 ]' 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.243 [2024-11-05 03:16:40.686622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.243 [2024-11-05 03:16:40.686715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.243 [2024-11-05 03:16:40.686743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:27.243 [2024-11-05 03:16:40.686759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.243 [2024-11-05 03:16:40.689570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.243 [2024-11-05 03:16:40.689620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.243 Passthru0 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.243 { 00:05:27.243 "name": "Malloc2", 00:05:27.243 "aliases": [ 00:05:27.243 "98b9e92a-41a3-45dc-bc95-126a5de73210" 
00:05:27.243 ], 00:05:27.243 "product_name": "Malloc disk", 00:05:27.243 "block_size": 512, 00:05:27.243 "num_blocks": 16384, 00:05:27.243 "uuid": "98b9e92a-41a3-45dc-bc95-126a5de73210", 00:05:27.243 "assigned_rate_limits": { 00:05:27.243 "rw_ios_per_sec": 0, 00:05:27.243 "rw_mbytes_per_sec": 0, 00:05:27.243 "r_mbytes_per_sec": 0, 00:05:27.243 "w_mbytes_per_sec": 0 00:05:27.243 }, 00:05:27.243 "claimed": true, 00:05:27.243 "claim_type": "exclusive_write", 00:05:27.243 "zoned": false, 00:05:27.243 "supported_io_types": { 00:05:27.243 "read": true, 00:05:27.243 "write": true, 00:05:27.243 "unmap": true, 00:05:27.243 "flush": true, 00:05:27.243 "reset": true, 00:05:27.243 "nvme_admin": false, 00:05:27.243 "nvme_io": false, 00:05:27.243 "nvme_io_md": false, 00:05:27.243 "write_zeroes": true, 00:05:27.243 "zcopy": true, 00:05:27.243 "get_zone_info": false, 00:05:27.243 "zone_management": false, 00:05:27.243 "zone_append": false, 00:05:27.243 "compare": false, 00:05:27.243 "compare_and_write": false, 00:05:27.243 "abort": true, 00:05:27.243 "seek_hole": false, 00:05:27.243 "seek_data": false, 00:05:27.243 "copy": true, 00:05:27.243 "nvme_iov_md": false 00:05:27.243 }, 00:05:27.243 "memory_domains": [ 00:05:27.243 { 00:05:27.243 "dma_device_id": "system", 00:05:27.243 "dma_device_type": 1 00:05:27.243 }, 00:05:27.243 { 00:05:27.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.243 "dma_device_type": 2 00:05:27.243 } 00:05:27.243 ], 00:05:27.243 "driver_specific": {} 00:05:27.243 }, 00:05:27.243 { 00:05:27.243 "name": "Passthru0", 00:05:27.243 "aliases": [ 00:05:27.243 "398bf2eb-b2bd-50a7-af00-2698882815b9" 00:05:27.243 ], 00:05:27.243 "product_name": "passthru", 00:05:27.243 "block_size": 512, 00:05:27.243 "num_blocks": 16384, 00:05:27.243 "uuid": "398bf2eb-b2bd-50a7-af00-2698882815b9", 00:05:27.243 "assigned_rate_limits": { 00:05:27.243 "rw_ios_per_sec": 0, 00:05:27.243 "rw_mbytes_per_sec": 0, 00:05:27.243 "r_mbytes_per_sec": 0, 00:05:27.243 "w_mbytes_per_sec": 0 
00:05:27.243 }, 00:05:27.243 "claimed": false, 00:05:27.243 "zoned": false, 00:05:27.243 "supported_io_types": { 00:05:27.243 "read": true, 00:05:27.243 "write": true, 00:05:27.243 "unmap": true, 00:05:27.243 "flush": true, 00:05:27.243 "reset": true, 00:05:27.243 "nvme_admin": false, 00:05:27.243 "nvme_io": false, 00:05:27.243 "nvme_io_md": false, 00:05:27.243 "write_zeroes": true, 00:05:27.243 "zcopy": true, 00:05:27.243 "get_zone_info": false, 00:05:27.243 "zone_management": false, 00:05:27.243 "zone_append": false, 00:05:27.243 "compare": false, 00:05:27.243 "compare_and_write": false, 00:05:27.243 "abort": true, 00:05:27.243 "seek_hole": false, 00:05:27.243 "seek_data": false, 00:05:27.243 "copy": true, 00:05:27.243 "nvme_iov_md": false 00:05:27.243 }, 00:05:27.243 "memory_domains": [ 00:05:27.243 { 00:05:27.243 "dma_device_id": "system", 00:05:27.243 "dma_device_type": 1 00:05:27.243 }, 00:05:27.243 { 00:05:27.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.243 "dma_device_type": 2 00:05:27.243 } 00:05:27.243 ], 00:05:27.243 "driver_specific": { 00:05:27.243 "passthru": { 00:05:27.243 "name": "Passthru0", 00:05:27.243 "base_bdev_name": "Malloc2" 00:05:27.243 } 00:05:27.243 } 00:05:27.243 } 00:05:27.243 ]' 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.243 ************************************ 00:05:27.243 END TEST rpc_daemon_integrity 00:05:27.243 ************************************ 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.243 00:05:27.243 real 0m0.353s 00:05:27.243 user 0m0.224s 00:05:27.243 sys 0m0.038s 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.243 03:16:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.502 03:16:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.502 03:16:40 rpc -- rpc/rpc.sh@84 -- # killprocess 56716 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@952 -- # '[' -z 56716 ']' 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@956 -- # kill -0 56716 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@957 -- # uname 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56716 00:05:27.502 killing process with pid 56716 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56716' 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@971 -- # kill 56716 00:05:27.502 03:16:40 rpc -- common/autotest_common.sh@976 -- # wait 56716 00:05:29.408 00:05:29.408 real 0m4.930s 00:05:29.408 user 0m5.607s 00:05:29.408 sys 0m0.863s 00:05:29.408 03:16:42 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.408 ************************************ 00:05:29.408 END TEST rpc 00:05:29.408 ************************************ 00:05:29.408 03:16:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.408 03:16:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:29.408 03:16:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.408 03:16:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.408 03:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.408 ************************************ 00:05:29.408 START TEST skip_rpc 00:05:29.408 ************************************ 00:05:29.408 03:16:43 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:29.667 * Looking for test storage... 
00:05:29.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.667 03:16:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.667 --rc genhtml_branch_coverage=1 00:05:29.667 --rc genhtml_function_coverage=1 00:05:29.667 --rc genhtml_legend=1 00:05:29.667 --rc geninfo_all_blocks=1 00:05:29.667 --rc geninfo_unexecuted_blocks=1 00:05:29.667 00:05:29.667 ' 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.667 --rc genhtml_branch_coverage=1 00:05:29.667 --rc genhtml_function_coverage=1 00:05:29.667 --rc genhtml_legend=1 00:05:29.667 --rc geninfo_all_blocks=1 00:05:29.667 --rc geninfo_unexecuted_blocks=1 00:05:29.667 00:05:29.667 ' 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.667 --rc genhtml_branch_coverage=1 00:05:29.667 --rc genhtml_function_coverage=1 00:05:29.667 --rc genhtml_legend=1 00:05:29.667 --rc geninfo_all_blocks=1 00:05:29.667 --rc geninfo_unexecuted_blocks=1 00:05:29.667 00:05:29.667 ' 00:05:29.667 03:16:43 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.667 --rc genhtml_branch_coverage=1 00:05:29.667 --rc genhtml_function_coverage=1 00:05:29.667 --rc genhtml_legend=1 00:05:29.667 --rc geninfo_all_blocks=1 00:05:29.667 --rc geninfo_unexecuted_blocks=1 00:05:29.668 00:05:29.668 ' 00:05:29.668 03:16:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.668 03:16:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.668 03:16:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:29.668 03:16:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.668 03:16:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.668 03:16:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.668 ************************************ 00:05:29.668 START TEST skip_rpc 00:05:29.668 ************************************ 00:05:29.668 03:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:29.668 03:16:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56945 00:05:29.668 03:16:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.668 03:16:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:29.668 03:16:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:29.926 [2024-11-05 03:16:43.360041] Starting SPDK v25.01-pre 
git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:29.926 [2024-11-05 03:16:43.360529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56945 ] 00:05:29.927 [2024-11-05 03:16:43.544066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.186 [2024-11-05 03:16:43.657593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56945 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56945 ']' 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56945 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56945 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56945' 00:05:35.455 killing process with pid 56945 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56945 00:05:35.455 03:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56945 00:05:36.833 00:05:36.833 real 0m7.154s 00:05:36.833 user 0m6.613s 00:05:36.833 sys 0m0.442s 00:05:36.833 03:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.833 03:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.833 ************************************ 00:05:36.833 END TEST skip_rpc 00:05:36.833 ************************************ 00:05:36.833 03:16:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.833 03:16:50 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.833 03:16:50 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.833 03:16:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.834 
************************************ 00:05:36.834 START TEST skip_rpc_with_json 00:05:36.834 ************************************ 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57049 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57049 00:05:36.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57049 ']' 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.834 03:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.092 [2024-11-05 03:16:50.570332] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:37.092 [2024-11-05 03:16:50.570880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57049 ] 00:05:37.351 [2024-11-05 03:16:50.760031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.351 [2024-11-05 03:16:50.885854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.289 [2024-11-05 03:16:51.760959] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.289 request: 00:05:38.289 { 00:05:38.289 "trtype": "tcp", 00:05:38.289 "method": "nvmf_get_transports", 00:05:38.289 "req_id": 1 00:05:38.289 } 00:05:38.289 Got JSON-RPC error response 00:05:38.289 response: 00:05:38.289 { 00:05:38.289 "code": -19, 00:05:38.289 "message": "No such device" 00:05:38.289 } 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.289 [2024-11-05 03:16:51.773059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.289 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.548 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.548 03:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.548 { 00:05:38.548 "subsystems": [ 00:05:38.548 { 00:05:38.548 "subsystem": "fsdev", 00:05:38.548 "config": [ 00:05:38.548 { 00:05:38.548 "method": "fsdev_set_opts", 00:05:38.548 "params": { 00:05:38.548 "fsdev_io_pool_size": 65535, 00:05:38.548 "fsdev_io_cache_size": 256 00:05:38.548 } 00:05:38.548 } 00:05:38.548 ] 00:05:38.548 }, 00:05:38.548 { 00:05:38.548 "subsystem": "keyring", 00:05:38.548 "config": [] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "iobuf", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "iobuf_set_options", 00:05:38.549 "params": { 00:05:38.549 "small_pool_count": 8192, 00:05:38.549 "large_pool_count": 1024, 00:05:38.549 "small_bufsize": 8192, 00:05:38.549 "large_bufsize": 135168, 00:05:38.549 "enable_numa": false 00:05:38.549 } 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "sock", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "sock_set_default_impl", 00:05:38.549 "params": { 00:05:38.549 "impl_name": "posix" 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "sock_impl_set_options", 00:05:38.549 "params": { 00:05:38.549 "impl_name": "ssl", 00:05:38.549 "recv_buf_size": 4096, 00:05:38.549 "send_buf_size": 4096, 00:05:38.549 "enable_recv_pipe": true, 00:05:38.549 "enable_quickack": false, 00:05:38.549 
"enable_placement_id": 0, 00:05:38.549 "enable_zerocopy_send_server": true, 00:05:38.549 "enable_zerocopy_send_client": false, 00:05:38.549 "zerocopy_threshold": 0, 00:05:38.549 "tls_version": 0, 00:05:38.549 "enable_ktls": false 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "sock_impl_set_options", 00:05:38.549 "params": { 00:05:38.549 "impl_name": "posix", 00:05:38.549 "recv_buf_size": 2097152, 00:05:38.549 "send_buf_size": 2097152, 00:05:38.549 "enable_recv_pipe": true, 00:05:38.549 "enable_quickack": false, 00:05:38.549 "enable_placement_id": 0, 00:05:38.549 "enable_zerocopy_send_server": true, 00:05:38.549 "enable_zerocopy_send_client": false, 00:05:38.549 "zerocopy_threshold": 0, 00:05:38.549 "tls_version": 0, 00:05:38.549 "enable_ktls": false 00:05:38.549 } 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "vmd", 00:05:38.549 "config": [] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "accel", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "accel_set_options", 00:05:38.549 "params": { 00:05:38.549 "small_cache_size": 128, 00:05:38.549 "large_cache_size": 16, 00:05:38.549 "task_count": 2048, 00:05:38.549 "sequence_count": 2048, 00:05:38.549 "buf_count": 2048 00:05:38.549 } 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "bdev", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "bdev_set_options", 00:05:38.549 "params": { 00:05:38.549 "bdev_io_pool_size": 65535, 00:05:38.549 "bdev_io_cache_size": 256, 00:05:38.549 "bdev_auto_examine": true, 00:05:38.549 "iobuf_small_cache_size": 128, 00:05:38.549 "iobuf_large_cache_size": 16 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "bdev_raid_set_options", 00:05:38.549 "params": { 00:05:38.549 "process_window_size_kb": 1024, 00:05:38.549 "process_max_bandwidth_mb_sec": 0 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "bdev_iscsi_set_options", 
00:05:38.549 "params": { 00:05:38.549 "timeout_sec": 30 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "bdev_nvme_set_options", 00:05:38.549 "params": { 00:05:38.549 "action_on_timeout": "none", 00:05:38.549 "timeout_us": 0, 00:05:38.549 "timeout_admin_us": 0, 00:05:38.549 "keep_alive_timeout_ms": 10000, 00:05:38.549 "arbitration_burst": 0, 00:05:38.549 "low_priority_weight": 0, 00:05:38.549 "medium_priority_weight": 0, 00:05:38.549 "high_priority_weight": 0, 00:05:38.549 "nvme_adminq_poll_period_us": 10000, 00:05:38.549 "nvme_ioq_poll_period_us": 0, 00:05:38.549 "io_queue_requests": 0, 00:05:38.549 "delay_cmd_submit": true, 00:05:38.549 "transport_retry_count": 4, 00:05:38.549 "bdev_retry_count": 3, 00:05:38.549 "transport_ack_timeout": 0, 00:05:38.549 "ctrlr_loss_timeout_sec": 0, 00:05:38.549 "reconnect_delay_sec": 0, 00:05:38.549 "fast_io_fail_timeout_sec": 0, 00:05:38.549 "disable_auto_failback": false, 00:05:38.549 "generate_uuids": false, 00:05:38.549 "transport_tos": 0, 00:05:38.549 "nvme_error_stat": false, 00:05:38.549 "rdma_srq_size": 0, 00:05:38.549 "io_path_stat": false, 00:05:38.549 "allow_accel_sequence": false, 00:05:38.549 "rdma_max_cq_size": 0, 00:05:38.549 "rdma_cm_event_timeout_ms": 0, 00:05:38.549 "dhchap_digests": [ 00:05:38.549 "sha256", 00:05:38.549 "sha384", 00:05:38.549 "sha512" 00:05:38.549 ], 00:05:38.549 "dhchap_dhgroups": [ 00:05:38.549 "null", 00:05:38.549 "ffdhe2048", 00:05:38.549 "ffdhe3072", 00:05:38.549 "ffdhe4096", 00:05:38.549 "ffdhe6144", 00:05:38.549 "ffdhe8192" 00:05:38.549 ] 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "bdev_nvme_set_hotplug", 00:05:38.549 "params": { 00:05:38.549 "period_us": 100000, 00:05:38.549 "enable": false 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "bdev_wait_for_examine" 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "scsi", 00:05:38.549 "config": null 00:05:38.549 }, 00:05:38.549 { 
00:05:38.549 "subsystem": "scheduler", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "framework_set_scheduler", 00:05:38.549 "params": { 00:05:38.549 "name": "static" 00:05:38.549 } 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "vhost_scsi", 00:05:38.549 "config": [] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "vhost_blk", 00:05:38.549 "config": [] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "ublk", 00:05:38.549 "config": [] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "nbd", 00:05:38.549 "config": [] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "nvmf", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "nvmf_set_config", 00:05:38.549 "params": { 00:05:38.549 "discovery_filter": "match_any", 00:05:38.549 "admin_cmd_passthru": { 00:05:38.549 "identify_ctrlr": false 00:05:38.549 }, 00:05:38.549 "dhchap_digests": [ 00:05:38.549 "sha256", 00:05:38.549 "sha384", 00:05:38.549 "sha512" 00:05:38.549 ], 00:05:38.549 "dhchap_dhgroups": [ 00:05:38.549 "null", 00:05:38.549 "ffdhe2048", 00:05:38.549 "ffdhe3072", 00:05:38.549 "ffdhe4096", 00:05:38.549 "ffdhe6144", 00:05:38.549 "ffdhe8192" 00:05:38.549 ] 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "nvmf_set_max_subsystems", 00:05:38.549 "params": { 00:05:38.549 "max_subsystems": 1024 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "nvmf_set_crdt", 00:05:38.549 "params": { 00:05:38.549 "crdt1": 0, 00:05:38.549 "crdt2": 0, 00:05:38.549 "crdt3": 0 00:05:38.549 } 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "method": "nvmf_create_transport", 00:05:38.549 "params": { 00:05:38.549 "trtype": "TCP", 00:05:38.549 "max_queue_depth": 128, 00:05:38.549 "max_io_qpairs_per_ctrlr": 127, 00:05:38.549 "in_capsule_data_size": 4096, 00:05:38.549 "max_io_size": 131072, 00:05:38.549 "io_unit_size": 131072, 00:05:38.549 "max_aq_depth": 128, 00:05:38.549 "num_shared_buffers": 511, 
00:05:38.549 "buf_cache_size": 4294967295, 00:05:38.549 "dif_insert_or_strip": false, 00:05:38.549 "zcopy": false, 00:05:38.549 "c2h_success": true, 00:05:38.549 "sock_priority": 0, 00:05:38.549 "abort_timeout_sec": 1, 00:05:38.549 "ack_timeout": 0, 00:05:38.549 "data_wr_pool_size": 0 00:05:38.549 } 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 }, 00:05:38.549 { 00:05:38.549 "subsystem": "iscsi", 00:05:38.549 "config": [ 00:05:38.549 { 00:05:38.549 "method": "iscsi_set_options", 00:05:38.549 "params": { 00:05:38.549 "node_base": "iqn.2016-06.io.spdk", 00:05:38.549 "max_sessions": 128, 00:05:38.549 "max_connections_per_session": 2, 00:05:38.549 "max_queue_depth": 64, 00:05:38.549 "default_time2wait": 2, 00:05:38.549 "default_time2retain": 20, 00:05:38.549 "first_burst_length": 8192, 00:05:38.549 "immediate_data": true, 00:05:38.549 "allow_duplicated_isid": false, 00:05:38.549 "error_recovery_level": 0, 00:05:38.549 "nop_timeout": 60, 00:05:38.549 "nop_in_interval": 30, 00:05:38.549 "disable_chap": false, 00:05:38.549 "require_chap": false, 00:05:38.549 "mutual_chap": false, 00:05:38.549 "chap_group": 0, 00:05:38.549 "max_large_datain_per_connection": 64, 00:05:38.549 "max_r2t_per_connection": 4, 00:05:38.549 "pdu_pool_size": 36864, 00:05:38.549 "immediate_data_pool_size": 16384, 00:05:38.549 "data_out_pool_size": 2048 00:05:38.549 } 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 } 00:05:38.549 ] 00:05:38.549 } 00:05:38.549 03:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.549 03:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57049 00:05:38.549 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57049 ']' 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57049 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57049 00:05:38.550 killing process with pid 57049 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57049' 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57049 00:05:38.550 03:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57049 00:05:40.454 03:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57100 00:05:40.454 03:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.454 03:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57100 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57100 ']' 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57100 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57100 00:05:45.727 killing process with pid 57100 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57100' 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57100 00:05:45.727 03:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57100 00:05:47.631 03:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:47.632 00:05:47.632 real 0m10.515s 00:05:47.632 user 0m9.892s 00:05:47.632 sys 0m1.057s 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.632 ************************************ 00:05:47.632 END TEST skip_rpc_with_json 00:05:47.632 ************************************ 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.632 03:17:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:47.632 03:17:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.632 03:17:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.632 03:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.632 ************************************ 00:05:47.632 START TEST skip_rpc_with_delay 00:05:47.632 ************************************ 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:47.632 
03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.632 03:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.632 [2024-11-05 03:17:01.131379] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.632 00:05:47.632 real 0m0.202s 00:05:47.632 user 0m0.109s 00:05:47.632 sys 0m0.090s 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.632 ************************************ 00:05:47.632 END TEST skip_rpc_with_delay 00:05:47.632 ************************************ 00:05:47.632 03:17:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:47.632 03:17:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:47.632 03:17:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:47.632 03:17:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:47.632 03:17:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.632 03:17:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.632 03:17:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.632 ************************************ 00:05:47.632 START TEST exit_on_failed_rpc_init 00:05:47.632 ************************************ 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:47.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57228 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57228 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57228 ']' 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.632 03:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.891 [2024-11-05 03:17:01.361056] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:47.891 [2024-11-05 03:17:01.361233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57228 ] 00:05:48.151 [2024-11-05 03:17:01.533182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.151 [2024-11-05 03:17:01.659581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.088 03:17:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.088 03:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.088 [2024-11-05 03:17:02.559418] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:49.088 [2024-11-05 03:17:02.559550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57246 ] 00:05:49.348 [2024-11-05 03:17:02.737724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.348 [2024-11-05 03:17:02.891911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.348 [2024-11-05 03:17:02.892324] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:49.348 [2024-11-05 03:17:02.892362] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.348 [2024-11-05 03:17:02.892385] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57228 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57228 ']' 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57228 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57228 00:05:49.606 killing process with pid 57228 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 57228' 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57228 00:05:49.606 03:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57228 00:05:51.510 00:05:51.510 real 0m3.857s 00:05:51.510 user 0m4.317s 00:05:51.510 sys 0m0.628s 00:05:51.510 ************************************ 00:05:51.510 END TEST exit_on_failed_rpc_init 00:05:51.510 ************************************ 00:05:51.510 03:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.510 03:17:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.770 03:17:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.770 00:05:51.770 real 0m22.123s 00:05:51.770 user 0m21.125s 00:05:51.770 sys 0m2.410s 00:05:51.770 03:17:05 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.770 03:17:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.770 ************************************ 00:05:51.770 END TEST skip_rpc 00:05:51.770 ************************************ 00:05:51.770 03:17:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.770 03:17:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.770 03:17:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.770 03:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.770 ************************************ 00:05:51.770 START TEST rpc_client 00:05:51.770 ************************************ 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.770 * Looking for test storage... 
00:05:51.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.770 03:17:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.770 --rc genhtml_branch_coverage=1 00:05:51.770 --rc genhtml_function_coverage=1 00:05:51.770 --rc genhtml_legend=1 00:05:51.770 --rc geninfo_all_blocks=1 00:05:51.770 --rc geninfo_unexecuted_blocks=1 00:05:51.770 00:05:51.770 ' 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.770 --rc genhtml_branch_coverage=1 00:05:51.770 --rc genhtml_function_coverage=1 00:05:51.770 --rc genhtml_legend=1 00:05:51.770 --rc geninfo_all_blocks=1 00:05:51.770 --rc geninfo_unexecuted_blocks=1 00:05:51.770 00:05:51.770 ' 00:05:51.770 03:17:05 rpc_client -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.770 --rc genhtml_branch_coverage=1 00:05:51.770 --rc genhtml_function_coverage=1 00:05:51.770 --rc genhtml_legend=1 00:05:51.770 --rc geninfo_all_blocks=1 00:05:51.770 --rc geninfo_unexecuted_blocks=1 00:05:51.770 00:05:51.770 ' 00:05:51.770 03:17:05 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.770 --rc genhtml_branch_coverage=1 00:05:51.770 --rc genhtml_function_coverage=1 00:05:51.770 --rc genhtml_legend=1 00:05:51.770 --rc geninfo_all_blocks=1 00:05:51.770 --rc geninfo_unexecuted_blocks=1 00:05:51.770 00:05:51.770 ' 00:05:51.770 03:17:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:52.029 OK 00:05:52.029 03:17:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:52.029 00:05:52.029 real 0m0.249s 00:05:52.029 user 0m0.142s 00:05:52.029 sys 0m0.117s 00:05:52.029 03:17:05 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.029 03:17:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:52.029 ************************************ 00:05:52.029 END TEST rpc_client 00:05:52.029 ************************************ 00:05:52.029 03:17:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:52.029 03:17:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.029 03:17:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.029 03:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.029 ************************************ 00:05:52.029 START TEST json_config 00:05:52.029 ************************************ 00:05:52.029 03:17:05 json_config -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:52.029 03:17:05 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:52.029 03:17:05 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:52.029 03:17:05 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:52.029 03:17:05 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:52.029 03:17:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.029 03:17:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.029 03:17:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.029 03:17:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.029 03:17:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.029 03:17:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.029 03:17:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.029 03:17:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.029 03:17:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.029 03:17:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.029 03:17:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.029 03:17:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:52.029 03:17:05 json_config -- scripts/common.sh@345 -- # : 1 00:05:52.029 03:17:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.029 03:17:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.289 03:17:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:52.289 03:17:05 json_config -- scripts/common.sh@353 -- # local d=1 00:05:52.289 03:17:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.289 03:17:05 json_config -- scripts/common.sh@355 -- # echo 1 00:05:52.289 03:17:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.289 03:17:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:52.289 03:17:05 json_config -- scripts/common.sh@353 -- # local d=2 00:05:52.289 03:17:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.289 03:17:05 json_config -- scripts/common.sh@355 -- # echo 2 00:05:52.289 03:17:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.289 03:17:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.290 03:17:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.290 03:17:05 json_config -- scripts/common.sh@368 -- # return 0 00:05:52.290 03:17:05 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.290 03:17:05 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.290 --rc genhtml_branch_coverage=1 00:05:52.290 --rc genhtml_function_coverage=1 00:05:52.290 --rc genhtml_legend=1 00:05:52.290 --rc geninfo_all_blocks=1 00:05:52.290 --rc geninfo_unexecuted_blocks=1 00:05:52.290 00:05:52.290 ' 00:05:52.290 03:17:05 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.290 --rc genhtml_branch_coverage=1 00:05:52.290 --rc genhtml_function_coverage=1 00:05:52.290 --rc genhtml_legend=1 00:05:52.290 --rc geninfo_all_blocks=1 00:05:52.290 --rc geninfo_unexecuted_blocks=1 00:05:52.290 00:05:52.290 ' 00:05:52.290 03:17:05 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.290 --rc genhtml_branch_coverage=1 00:05:52.290 --rc genhtml_function_coverage=1 00:05:52.290 --rc genhtml_legend=1 00:05:52.290 --rc geninfo_all_blocks=1 00:05:52.290 --rc geninfo_unexecuted_blocks=1 00:05:52.290 00:05:52.290 ' 00:05:52.290 03:17:05 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.290 --rc genhtml_branch_coverage=1 00:05:52.290 --rc genhtml_function_coverage=1 00:05:52.290 --rc genhtml_legend=1 00:05:52.290 --rc geninfo_all_blocks=1 00:05:52.290 --rc geninfo_unexecuted_blocks=1 00:05:52.290 00:05:52.290 ' 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:714dfb86-ac37-497f-90fb-9f62239d38c2 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=714dfb86-ac37-497f-90fb-9f62239d38c2 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:52.290 03:17:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:52.290 03:17:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.290 03:17:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.290 03:17:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.290 03:17:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.290 03:17:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.290 03:17:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.290 03:17:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:52.290 03:17:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@51 -- # : 0 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:52.290 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:52.290 03:17:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:52.290 WARNING: No tests are enabled so not running JSON configuration tests 00:05:52.290 03:17:05 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:52.290 03:17:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:52.290 ************************************ 00:05:52.290 END TEST json_config 00:05:52.290 ************************************ 00:05:52.290 00:05:52.290 real 0m0.188s 00:05:52.290 user 0m0.116s 00:05:52.290 sys 0m0.074s 00:05:52.290 03:17:05 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.290 03:17:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.290 03:17:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:52.290 03:17:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.290 03:17:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.290 03:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.290 ************************************ 00:05:52.290 START TEST json_config_extra_key 00:05:52.290 ************************************ 00:05:52.290 03:17:05 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:52.290 03:17:05 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:52.290 03:17:05 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:05:52.290 03:17:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:52.290 03:17:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.290 03:17:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:52.290 03:17:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.290 03:17:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:52.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.291 --rc genhtml_branch_coverage=1 00:05:52.291 --rc genhtml_function_coverage=1 00:05:52.291 --rc genhtml_legend=1 00:05:52.291 --rc geninfo_all_blocks=1 00:05:52.291 --rc geninfo_unexecuted_blocks=1 00:05:52.291 00:05:52.291 ' 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:52.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.291 --rc genhtml_branch_coverage=1 00:05:52.291 --rc genhtml_function_coverage=1 00:05:52.291 --rc 
genhtml_legend=1 00:05:52.291 --rc geninfo_all_blocks=1 00:05:52.291 --rc geninfo_unexecuted_blocks=1 00:05:52.291 00:05:52.291 ' 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:52.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.291 --rc genhtml_branch_coverage=1 00:05:52.291 --rc genhtml_function_coverage=1 00:05:52.291 --rc genhtml_legend=1 00:05:52.291 --rc geninfo_all_blocks=1 00:05:52.291 --rc geninfo_unexecuted_blocks=1 00:05:52.291 00:05:52.291 ' 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:52.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.291 --rc genhtml_branch_coverage=1 00:05:52.291 --rc genhtml_function_coverage=1 00:05:52.291 --rc genhtml_legend=1 00:05:52.291 --rc geninfo_all_blocks=1 00:05:52.291 --rc geninfo_unexecuted_blocks=1 00:05:52.291 00:05:52.291 ' 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:714dfb86-ac37-497f-90fb-9f62239d38c2 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=714dfb86-ac37-497f-90fb-9f62239d38c2 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:52.291 03:17:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:52.291 03:17:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.291 03:17:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.291 03:17:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.291 03:17:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.291 03:17:05 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.291 03:17:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.291 03:17:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:52.291 03:17:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
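The `lt 1.15 2` / `cmp_versions` calls traced repeatedly in this log split each version string on `.-:` and compare the fields numerically, left to right. A minimal standalone sketch of that logic (the helper name `version_lt` is hypothetical; this mirrors the traced scripts/common.sh behavior but is not that implementation, and it assumes purely numeric fields):

```shell
#!/usr/bin/env bash
# version_lt A B -> exit 0 if version A sorts strictly before version B.
# Fields are compared numerically, left to right; missing fields count as 0.
version_lt() {
    local IFS=.-:            # split on dot, dash, or colon, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                 # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 sorts before 2"
```

This is why the trace reads `ver1[v]=1`, `ver2[v]=2`, then `(( ver1[v] < ver2[v] ))` before `lcov_rc_opt` is set: lcov 1.15 is older than 2.x, so the legacy `--rc` option spellings are chosen.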
00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:52.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:52.291 03:17:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:52.291 INFO: launching applications... 00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
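The `[: : integer expression expected` line from nvmf/common.sh that recurs in this log comes from testing an empty string with the numeric `-eq` operator: `'[' '' -eq 1 ']'` cannot parse `''` as an integer, so `[` prints the complaint and returns a nonzero status, and the `if` simply takes the false branch. A small reproduction, plus the usual guard of substituting a numeric default (the variable name `flag` is illustrative only):

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" noise from the log:
# an empty value fed to the integer operator -eq makes `[` complain on stderr
# and exit with status 2, which an `if` just treats as false.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "disabled (empty value is not an integer)"
fi

# A common guard: expand with a numeric default before the integer test,
# so `[` always sees a parseable operand.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

In the traced run this is harmless noise — the false branch is the intended path when the controlling variable is unset — which is presumably why the scripts leave it unguarded.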
00:05:52.291 03:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57450 00:05:52.291 Waiting for target to run... 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57450 /var/tmp/spdk_tgt.sock 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57450 ']' 00:05:52.291 03:17:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
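The "Waiting for process to start up and listen on UNIX domain socket" step above is a poll-with-retries pattern: the launcher starts spdk_tgt with `-r /var/tmp/spdk_tgt.sock` and then retries (the trace shows `max_retries=100`) until the socket is usable. A simplified stand-in (the real `waitforlisten` helper also verifies the server answers RPCs; this sketch only waits for the socket file to appear):

```shell
#!/usr/bin/env bash
# wait_for_socket PATH [RETRIES]: poll until a UNIX socket exists at PATH.
# Returns 0 on success, 1 after the retry budget is exhausted.
wait_for_socket() {
    local sock=$1 retries=${2:-100} i
    for (( i = 0; i < retries; i++ )); do
        [ -S "$sock" ] && return 0   # -S: file exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Bounding the wait matters in CI: if the target crashes during startup, the helper fails fast with a clear message instead of hanging the job until the Jenkins timeout fires.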
00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.291 03:17:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.550 [2024-11-05 03:17:06.048725] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:52.550 [2024-11-05 03:17:06.048929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57450 ] 00:05:53.118 [2024-11-05 03:17:06.533211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.118 [2024-11-05 03:17:06.669613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.686 03:17:07 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.686 00:05:53.686 03:17:07 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:53.686 INFO: shutting down applications... 00:05:53.686 03:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:53.686 03:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57450 ]] 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57450 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57450 00:05:53.686 03:17:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.253 03:17:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.253 03:17:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.253 03:17:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57450 00:05:54.253 03:17:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.852 03:17:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.852 03:17:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.852 03:17:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57450 00:05:54.852 03:17:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.420 03:17:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:55.420 03:17:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.420 03:17:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57450 00:05:55.420 03:17:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.988 03:17:09 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:55.988 03:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.988 03:17:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57450 00:05:55.988 03:17:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57450 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:56.247 SPDK target shutdown done 00:05:56.247 03:17:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:56.247 Success 00:05:56.247 03:17:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:56.247 ************************************ 00:05:56.247 END TEST json_config_extra_key 00:05:56.247 ************************************ 00:05:56.247 00:05:56.247 real 0m4.081s 00:05:56.247 user 0m3.712s 00:05:56.247 sys 0m0.673s 00:05:56.247 03:17:09 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.247 03:17:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.247 03:17:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.247 03:17:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.247 03:17:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.247 03:17:09 -- common/autotest_common.sh@10 -- # set +x 00:05:56.247 ************************************ 00:05:56.247 START TEST alias_rpc 00:05:56.247 
************************************ 00:05:56.247 03:17:09 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.506 * Looking for test storage... 00:05:56.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:56.506 03:17:09 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.506 03:17:09 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.506 03:17:09 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.506 03:17:10 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.506 03:17:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:56.506 03:17:10 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.506 03:17:10 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.506 --rc genhtml_branch_coverage=1 00:05:56.506 --rc genhtml_function_coverage=1 00:05:56.506 --rc genhtml_legend=1 00:05:56.506 --rc geninfo_all_blocks=1 00:05:56.506 --rc geninfo_unexecuted_blocks=1 00:05:56.506 00:05:56.506 ' 00:05:56.506 03:17:10 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.506 --rc genhtml_branch_coverage=1 00:05:56.506 --rc genhtml_function_coverage=1 00:05:56.506 --rc genhtml_legend=1 00:05:56.506 --rc geninfo_all_blocks=1 00:05:56.506 --rc geninfo_unexecuted_blocks=1 00:05:56.506 00:05:56.506 ' 00:05:56.506 03:17:10 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:05:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.506 --rc genhtml_branch_coverage=1 00:05:56.506 --rc genhtml_function_coverage=1 00:05:56.506 --rc genhtml_legend=1 00:05:56.506 --rc geninfo_all_blocks=1 00:05:56.506 --rc geninfo_unexecuted_blocks=1 00:05:56.506 00:05:56.506 ' 00:05:56.506 03:17:10 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.506 --rc genhtml_branch_coverage=1 00:05:56.507 --rc genhtml_function_coverage=1 00:05:56.507 --rc genhtml_legend=1 00:05:56.507 --rc geninfo_all_blocks=1 00:05:56.507 --rc geninfo_unexecuted_blocks=1 00:05:56.507 00:05:56.507 ' 00:05:56.507 03:17:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.507 03:17:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57555 00:05:56.507 03:17:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.507 03:17:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57555 00:05:56.507 03:17:10 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57555 ']' 00:05:56.507 03:17:10 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.507 03:17:10 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.507 03:17:10 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.507 03:17:10 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.507 03:17:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.765 [2024-11-05 03:17:10.207279] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
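The `killprocess 57555` trace below checks the pid is alive with `kill -0`, looks up its command name with `ps -o comm=` (the trace uses `ps --no-headers -o comm=`), then kills and reaps it. A hedged sketch of that shape — not the actual autotest_common.sh helper, which also special-cases processes running under sudo:

```shell
#!/usr/bin/env bash
# killprocess PID: verify the process exists, report it, kill it, and reap it.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1        # nothing to kill
    name=$(ps -o comm= -p "$pid")                 # comm= suppresses the header
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap (works for our own children)
}
```

The `wait` matters when the killed process is a child of the test script: it collects the exit status so later `kill -0` probes correctly report the pid as gone.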
00:05:56.765 [2024-11-05 03:17:10.207503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57555 ] 00:05:56.765 [2024-11-05 03:17:10.392992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.024 [2024-11-05 03:17:10.517710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.958 03:17:11 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.958 03:17:11 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:57.958 03:17:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:58.217 03:17:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57555 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57555 ']' 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57555 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57555 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:58.217 killing process with pid 57555 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57555' 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@971 -- # kill 57555 00:05:58.217 03:17:11 alias_rpc -- common/autotest_common.sh@976 -- # wait 57555 00:06:00.118 00:06:00.118 real 0m3.835s 00:06:00.118 user 0m3.965s 00:06:00.118 sys 0m0.648s 00:06:00.118 03:17:13 alias_rpc -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:00.118 03:17:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.118 ************************************ 00:06:00.118 END TEST alias_rpc 00:06:00.118 ************************************ 00:06:00.377 03:17:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:00.377 03:17:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:00.377 03:17:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.377 03:17:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.377 03:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.377 ************************************ 00:06:00.377 START TEST spdkcli_tcp 00:06:00.377 ************************************ 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:00.377 * Looking for test storage... 00:06:00.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.377 
03:17:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.377 03:17:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.377 --rc genhtml_branch_coverage=1 00:06:00.377 --rc genhtml_function_coverage=1 00:06:00.377 --rc genhtml_legend=1 
00:06:00.377 --rc geninfo_all_blocks=1 00:06:00.377 --rc geninfo_unexecuted_blocks=1 00:06:00.377 00:06:00.377 ' 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.377 --rc genhtml_branch_coverage=1 00:06:00.377 --rc genhtml_function_coverage=1 00:06:00.377 --rc genhtml_legend=1 00:06:00.377 --rc geninfo_all_blocks=1 00:06:00.377 --rc geninfo_unexecuted_blocks=1 00:06:00.377 00:06:00.377 ' 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.377 --rc genhtml_branch_coverage=1 00:06:00.377 --rc genhtml_function_coverage=1 00:06:00.377 --rc genhtml_legend=1 00:06:00.377 --rc geninfo_all_blocks=1 00:06:00.377 --rc geninfo_unexecuted_blocks=1 00:06:00.377 00:06:00.377 ' 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.377 --rc genhtml_branch_coverage=1 00:06:00.377 --rc genhtml_function_coverage=1 00:06:00.377 --rc genhtml_legend=1 00:06:00.377 --rc geninfo_all_blocks=1 00:06:00.377 --rc geninfo_unexecuted_blocks=1 00:06:00.377 00:06:00.377 ' 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.377 03:17:13 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57657 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57657 00:06:00.377 03:17:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57657 ']' 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.377 03:17:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.636 [2024-11-05 03:17:14.090751] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:06:00.636 [2024-11-05 03:17:14.090918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57657 ] 00:06:00.894 [2024-11-05 03:17:14.278115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.894 [2024-11-05 03:17:14.396320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.894 [2024-11-05 03:17:14.396359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.831 03:17:15 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.831 03:17:15 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:01.831 03:17:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57679 00:06:01.831 03:17:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.831 03:17:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.091 [ 00:06:02.091 "bdev_malloc_delete", 00:06:02.091 "bdev_malloc_create", 00:06:02.091 "bdev_null_resize", 00:06:02.091 "bdev_null_delete", 00:06:02.091 "bdev_null_create", 00:06:02.091 "bdev_nvme_cuse_unregister", 00:06:02.091 "bdev_nvme_cuse_register", 00:06:02.091 "bdev_opal_new_user", 00:06:02.091 "bdev_opal_set_lock_state", 00:06:02.091 "bdev_opal_delete", 00:06:02.091 "bdev_opal_get_info", 00:06:02.091 "bdev_opal_create", 00:06:02.091 "bdev_nvme_opal_revert", 00:06:02.091 "bdev_nvme_opal_init", 00:06:02.091 "bdev_nvme_send_cmd", 00:06:02.091 "bdev_nvme_set_keys", 00:06:02.091 "bdev_nvme_get_path_iostat", 00:06:02.091 "bdev_nvme_get_mdns_discovery_info", 00:06:02.091 "bdev_nvme_stop_mdns_discovery", 00:06:02.091 "bdev_nvme_start_mdns_discovery", 00:06:02.091 "bdev_nvme_set_multipath_policy", 00:06:02.091 
"bdev_nvme_set_preferred_path", 00:06:02.091 "bdev_nvme_get_io_paths", 00:06:02.091 "bdev_nvme_remove_error_injection", 00:06:02.091 "bdev_nvme_add_error_injection", 00:06:02.091 "bdev_nvme_get_discovery_info", 00:06:02.091 "bdev_nvme_stop_discovery", 00:06:02.091 "bdev_nvme_start_discovery", 00:06:02.091 "bdev_nvme_get_controller_health_info", 00:06:02.091 "bdev_nvme_disable_controller", 00:06:02.091 "bdev_nvme_enable_controller", 00:06:02.091 "bdev_nvme_reset_controller", 00:06:02.091 "bdev_nvme_get_transport_statistics", 00:06:02.091 "bdev_nvme_apply_firmware", 00:06:02.092 "bdev_nvme_detach_controller", 00:06:02.092 "bdev_nvme_get_controllers", 00:06:02.092 "bdev_nvme_attach_controller", 00:06:02.092 "bdev_nvme_set_hotplug", 00:06:02.092 "bdev_nvme_set_options", 00:06:02.092 "bdev_passthru_delete", 00:06:02.092 "bdev_passthru_create", 00:06:02.092 "bdev_lvol_set_parent_bdev", 00:06:02.092 "bdev_lvol_set_parent", 00:06:02.092 "bdev_lvol_check_shallow_copy", 00:06:02.092 "bdev_lvol_start_shallow_copy", 00:06:02.092 "bdev_lvol_grow_lvstore", 00:06:02.092 "bdev_lvol_get_lvols", 00:06:02.092 "bdev_lvol_get_lvstores", 00:06:02.092 "bdev_lvol_delete", 00:06:02.092 "bdev_lvol_set_read_only", 00:06:02.092 "bdev_lvol_resize", 00:06:02.092 "bdev_lvol_decouple_parent", 00:06:02.092 "bdev_lvol_inflate", 00:06:02.092 "bdev_lvol_rename", 00:06:02.092 "bdev_lvol_clone_bdev", 00:06:02.092 "bdev_lvol_clone", 00:06:02.092 "bdev_lvol_snapshot", 00:06:02.092 "bdev_lvol_create", 00:06:02.092 "bdev_lvol_delete_lvstore", 00:06:02.092 "bdev_lvol_rename_lvstore", 00:06:02.092 "bdev_lvol_create_lvstore", 00:06:02.092 "bdev_raid_set_options", 00:06:02.092 "bdev_raid_remove_base_bdev", 00:06:02.092 "bdev_raid_add_base_bdev", 00:06:02.092 "bdev_raid_delete", 00:06:02.092 "bdev_raid_create", 00:06:02.092 "bdev_raid_get_bdevs", 00:06:02.092 "bdev_error_inject_error", 00:06:02.092 "bdev_error_delete", 00:06:02.092 "bdev_error_create", 00:06:02.092 "bdev_split_delete", 00:06:02.092 
"bdev_split_create", 00:06:02.092 "bdev_delay_delete", 00:06:02.092 "bdev_delay_create", 00:06:02.092 "bdev_delay_update_latency", 00:06:02.092 "bdev_zone_block_delete", 00:06:02.092 "bdev_zone_block_create", 00:06:02.092 "blobfs_create", 00:06:02.092 "blobfs_detect", 00:06:02.092 "blobfs_set_cache_size", 00:06:02.092 "bdev_aio_delete", 00:06:02.092 "bdev_aio_rescan", 00:06:02.092 "bdev_aio_create", 00:06:02.092 "bdev_ftl_set_property", 00:06:02.092 "bdev_ftl_get_properties", 00:06:02.092 "bdev_ftl_get_stats", 00:06:02.092 "bdev_ftl_unmap", 00:06:02.092 "bdev_ftl_unload", 00:06:02.092 "bdev_ftl_delete", 00:06:02.092 "bdev_ftl_load", 00:06:02.092 "bdev_ftl_create", 00:06:02.092 "bdev_virtio_attach_controller", 00:06:02.092 "bdev_virtio_scsi_get_devices", 00:06:02.092 "bdev_virtio_detach_controller", 00:06:02.092 "bdev_virtio_blk_set_hotplug", 00:06:02.092 "bdev_iscsi_delete", 00:06:02.092 "bdev_iscsi_create", 00:06:02.092 "bdev_iscsi_set_options", 00:06:02.092 "accel_error_inject_error", 00:06:02.092 "ioat_scan_accel_module", 00:06:02.092 "dsa_scan_accel_module", 00:06:02.092 "iaa_scan_accel_module", 00:06:02.092 "keyring_file_remove_key", 00:06:02.092 "keyring_file_add_key", 00:06:02.092 "keyring_linux_set_options", 00:06:02.092 "fsdev_aio_delete", 00:06:02.092 "fsdev_aio_create", 00:06:02.092 "iscsi_get_histogram", 00:06:02.092 "iscsi_enable_histogram", 00:06:02.092 "iscsi_set_options", 00:06:02.092 "iscsi_get_auth_groups", 00:06:02.092 "iscsi_auth_group_remove_secret", 00:06:02.092 "iscsi_auth_group_add_secret", 00:06:02.092 "iscsi_delete_auth_group", 00:06:02.092 "iscsi_create_auth_group", 00:06:02.092 "iscsi_set_discovery_auth", 00:06:02.092 "iscsi_get_options", 00:06:02.093 "iscsi_target_node_request_logout", 00:06:02.093 "iscsi_target_node_set_redirect", 00:06:02.093 "iscsi_target_node_set_auth", 00:06:02.093 "iscsi_target_node_add_lun", 00:06:02.093 "iscsi_get_stats", 00:06:02.093 "iscsi_get_connections", 00:06:02.093 "iscsi_portal_group_set_auth", 
00:06:02.093 "iscsi_start_portal_group", 00:06:02.093 "iscsi_delete_portal_group", 00:06:02.093 "iscsi_create_portal_group", 00:06:02.093 "iscsi_get_portal_groups", 00:06:02.093 "iscsi_delete_target_node", 00:06:02.093 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.093 "iscsi_target_node_add_pg_ig_maps", 00:06:02.093 "iscsi_create_target_node", 00:06:02.093 "iscsi_get_target_nodes", 00:06:02.093 "iscsi_delete_initiator_group", 00:06:02.093 "iscsi_initiator_group_remove_initiators", 00:06:02.093 "iscsi_initiator_group_add_initiators", 00:06:02.093 "iscsi_create_initiator_group", 00:06:02.093 "iscsi_get_initiator_groups", 00:06:02.093 "nvmf_set_crdt", 00:06:02.093 "nvmf_set_config", 00:06:02.093 "nvmf_set_max_subsystems", 00:06:02.093 "nvmf_stop_mdns_prr", 00:06:02.093 "nvmf_publish_mdns_prr", 00:06:02.093 "nvmf_subsystem_get_listeners", 00:06:02.093 "nvmf_subsystem_get_qpairs", 00:06:02.093 "nvmf_subsystem_get_controllers", 00:06:02.093 "nvmf_get_stats", 00:06:02.093 "nvmf_get_transports", 00:06:02.093 "nvmf_create_transport", 00:06:02.093 "nvmf_get_targets", 00:06:02.093 "nvmf_delete_target", 00:06:02.093 "nvmf_create_target", 00:06:02.093 "nvmf_subsystem_allow_any_host", 00:06:02.093 "nvmf_subsystem_set_keys", 00:06:02.093 "nvmf_subsystem_remove_host", 00:06:02.093 "nvmf_subsystem_add_host", 00:06:02.093 "nvmf_ns_remove_host", 00:06:02.093 "nvmf_ns_add_host", 00:06:02.093 "nvmf_subsystem_remove_ns", 00:06:02.093 "nvmf_subsystem_set_ns_ana_group", 00:06:02.093 "nvmf_subsystem_add_ns", 00:06:02.093 "nvmf_subsystem_listener_set_ana_state", 00:06:02.093 "nvmf_discovery_get_referrals", 00:06:02.093 "nvmf_discovery_remove_referral", 00:06:02.093 "nvmf_discovery_add_referral", 00:06:02.093 "nvmf_subsystem_remove_listener", 00:06:02.093 "nvmf_subsystem_add_listener", 00:06:02.093 "nvmf_delete_subsystem", 00:06:02.093 "nvmf_create_subsystem", 00:06:02.093 "nvmf_get_subsystems", 00:06:02.093 "env_dpdk_get_mem_stats", 00:06:02.093 "nbd_get_disks", 00:06:02.093 
"nbd_stop_disk", 00:06:02.093 "nbd_start_disk", 00:06:02.093 "ublk_recover_disk", 00:06:02.093 "ublk_get_disks", 00:06:02.093 "ublk_stop_disk", 00:06:02.093 "ublk_start_disk", 00:06:02.093 "ublk_destroy_target", 00:06:02.093 "ublk_create_target", 00:06:02.093 "virtio_blk_create_transport", 00:06:02.093 "virtio_blk_get_transports", 00:06:02.093 "vhost_controller_set_coalescing", 00:06:02.093 "vhost_get_controllers", 00:06:02.093 "vhost_delete_controller", 00:06:02.093 "vhost_create_blk_controller", 00:06:02.093 "vhost_scsi_controller_remove_target", 00:06:02.093 "vhost_scsi_controller_add_target", 00:06:02.093 "vhost_start_scsi_controller", 00:06:02.093 "vhost_create_scsi_controller", 00:06:02.093 "thread_set_cpumask", 00:06:02.093 "scheduler_set_options", 00:06:02.093 "framework_get_governor", 00:06:02.093 "framework_get_scheduler", 00:06:02.093 "framework_set_scheduler", 00:06:02.093 "framework_get_reactors", 00:06:02.093 "thread_get_io_channels", 00:06:02.093 "thread_get_pollers", 00:06:02.093 "thread_get_stats", 00:06:02.093 "framework_monitor_context_switch", 00:06:02.093 "spdk_kill_instance", 00:06:02.093 "log_enable_timestamps", 00:06:02.093 "log_get_flags", 00:06:02.093 "log_clear_flag", 00:06:02.093 "log_set_flag", 00:06:02.093 "log_get_level", 00:06:02.093 "log_set_level", 00:06:02.093 "log_get_print_level", 00:06:02.093 "log_set_print_level", 00:06:02.093 "framework_enable_cpumask_locks", 00:06:02.093 "framework_disable_cpumask_locks", 00:06:02.093 "framework_wait_init", 00:06:02.093 "framework_start_init", 00:06:02.093 "scsi_get_devices", 00:06:02.094 "bdev_get_histogram", 00:06:02.094 "bdev_enable_histogram", 00:06:02.094 "bdev_set_qos_limit", 00:06:02.094 "bdev_set_qd_sampling_period", 00:06:02.094 "bdev_get_bdevs", 00:06:02.094 "bdev_reset_iostat", 00:06:02.094 "bdev_get_iostat", 00:06:02.094 "bdev_examine", 00:06:02.094 "bdev_wait_for_examine", 00:06:02.094 "bdev_set_options", 00:06:02.094 "accel_get_stats", 00:06:02.094 "accel_set_options", 
00:06:02.094 "accel_set_driver", 00:06:02.094 "accel_crypto_key_destroy", 00:06:02.094 "accel_crypto_keys_get", 00:06:02.094 "accel_crypto_key_create", 00:06:02.094 "accel_assign_opc", 00:06:02.094 "accel_get_module_info", 00:06:02.094 "accel_get_opc_assignments", 00:06:02.094 "vmd_rescan", 00:06:02.094 "vmd_remove_device", 00:06:02.094 "vmd_enable", 00:06:02.094 "sock_get_default_impl", 00:06:02.094 "sock_set_default_impl", 00:06:02.094 "sock_impl_set_options", 00:06:02.094 "sock_impl_get_options", 00:06:02.094 "iobuf_get_stats", 00:06:02.094 "iobuf_set_options", 00:06:02.094 "keyring_get_keys", 00:06:02.094 "framework_get_pci_devices", 00:06:02.094 "framework_get_config", 00:06:02.094 "framework_get_subsystems", 00:06:02.094 "fsdev_set_opts", 00:06:02.094 "fsdev_get_opts", 00:06:02.094 "trace_get_info", 00:06:02.094 "trace_get_tpoint_group_mask", 00:06:02.094 "trace_disable_tpoint_group", 00:06:02.094 "trace_enable_tpoint_group", 00:06:02.094 "trace_clear_tpoint_mask", 00:06:02.094 "trace_set_tpoint_mask", 00:06:02.094 "notify_get_notifications", 00:06:02.094 "notify_get_types", 00:06:02.094 "spdk_get_version", 00:06:02.094 "rpc_get_methods" 00:06:02.094 ] 00:06:02.094 03:17:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.094 03:17:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.094 03:17:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57657 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57657 ']' 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57657 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.094 03:17:15 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57657 00:06:02.094 killing process with pid 57657 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57657' 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57657 00:06:02.094 03:17:15 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57657 00:06:04.000 00:06:04.000 real 0m3.810s 00:06:04.000 user 0m6.888s 00:06:04.000 sys 0m0.654s 00:06:04.000 03:17:17 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.000 03:17:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.000 ************************************ 00:06:04.000 END TEST spdkcli_tcp 00:06:04.000 ************************************ 00:06:04.000 03:17:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.000 03:17:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.000 03:17:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.000 03:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:04.001 ************************************ 00:06:04.001 START TEST dpdk_mem_utility 00:06:04.001 ************************************ 00:06:04.001 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.277 * Looking for test storage... 
00:06:04.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.277 03:17:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.277 --rc genhtml_branch_coverage=1 00:06:04.277 --rc genhtml_function_coverage=1 00:06:04.277 --rc genhtml_legend=1 00:06:04.277 --rc geninfo_all_blocks=1 00:06:04.277 --rc geninfo_unexecuted_blocks=1 00:06:04.277 00:06:04.277 ' 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.277 --rc genhtml_branch_coverage=1 00:06:04.277 --rc genhtml_function_coverage=1 00:06:04.277 --rc genhtml_legend=1 00:06:04.277 --rc geninfo_all_blocks=1 00:06:04.277 --rc 
geninfo_unexecuted_blocks=1 00:06:04.277 00:06:04.277 ' 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.277 --rc genhtml_branch_coverage=1 00:06:04.277 --rc genhtml_function_coverage=1 00:06:04.277 --rc genhtml_legend=1 00:06:04.277 --rc geninfo_all_blocks=1 00:06:04.277 --rc geninfo_unexecuted_blocks=1 00:06:04.277 00:06:04.277 ' 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.277 --rc genhtml_branch_coverage=1 00:06:04.277 --rc genhtml_function_coverage=1 00:06:04.277 --rc genhtml_legend=1 00:06:04.277 --rc geninfo_all_blocks=1 00:06:04.277 --rc geninfo_unexecuted_blocks=1 00:06:04.277 00:06:04.277 ' 00:06:04.277 03:17:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:04.277 03:17:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57780 00:06:04.277 03:17:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.277 03:17:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57780 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57780 ']' 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.277 03:17:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.557 [2024-11-05 03:17:17.954723] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:04.557 [2024-11-05 03:17:17.954928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:06:04.557 [2024-11-05 03:17:18.139621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.816 [2024-11-05 03:17:18.257697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.757 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:05.757 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:05.757 03:17:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.757 03:17:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.757 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.757 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.757 { 00:06:05.757 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.757 } 00:06:05.757 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.757 03:17:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.757 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:05.757 1 heaps totaling size 816.000000 MiB 00:06:05.757 size: 816.000000 MiB heap id: 0 00:06:05.757 end heaps---------- 00:06:05.757 9 mempools totaling size 595.772034 MiB 00:06:05.757 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:05.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:05.757 size: 92.545471 MiB name: bdev_io_57780 00:06:05.757 size: 50.003479 MiB name: msgpool_57780 00:06:05.757 size: 36.509338 MiB name: fsdev_io_57780 00:06:05.757 size: 21.763794 MiB name: PDU_Pool 00:06:05.757 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:05.757 size: 4.133484 MiB name: evtpool_57780 00:06:05.757 size: 0.026123 MiB name: Session_Pool 00:06:05.757 end mempools------- 00:06:05.757 6 memzones totaling size 4.142822 MiB 00:06:05.757 size: 1.000366 MiB name: RG_ring_0_57780 00:06:05.757 size: 1.000366 MiB name: RG_ring_1_57780 00:06:05.757 size: 1.000366 MiB name: RG_ring_4_57780 00:06:05.757 size: 1.000366 MiB name: RG_ring_5_57780 00:06:05.757 size: 0.125366 MiB name: RG_ring_2_57780 00:06:05.757 size: 0.015991 MiB name: RG_ring_3_57780 00:06:05.757 end memzones------- 00:06:05.757 03:17:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.757 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:06:05.757 list of free elements. 
size: 16.791138 MiB 00:06:05.757 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:05.758 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:05.758 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:05.758 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:05.758 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:05.758 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:05.758 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:05.758 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:05.758 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:05.758 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:05.758 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:05.758 element at address: 0x20001ac00000 with size: 0.561462 MiB 00:06:05.758 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:05.758 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:05.758 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:05.758 element at address: 0x200012c00000 with size: 0.443481 MiB 00:06:05.758 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:05.758 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:05.758 list of standard malloc elements. 
size: 199.287964 MiB 00:06:05.758 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:05.758 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:05.758 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:05.758 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:05.758 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:05.758 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:05.758 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:05.758 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:05.758 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:05.758 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:05.758 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:05.758 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:05.758 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:05.758 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:05.758 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:05.758 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:05.758 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:05.759 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:05.759 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:05.759 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac910c0 with size: 0.000244 
MiB 00:06:05.759 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac92cc0 
with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:05.759 element at 
address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:05.759 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:05.759 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b780 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806bb80 with size: 0.000244 MiB 
00:06:05.759 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c580 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d380 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d480 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d580 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:05.759 element at address: 0x20002806d780 with 
size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806db80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e180 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:05.760 element at address: 
0x20002806f380 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f780 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:06:05.760 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:06:05.760 list of memzone associated elements. size: 599.920898 MiB 00:06:05.760 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:05.760 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.760 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:05.760 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.760 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:05.760 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57780_0 00:06:05.760 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:05.760 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57780_0 00:06:05.760 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:05.760 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57780_0 00:06:05.760 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:05.760 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.760 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:05.760 associated memzone info: size: 18.004944 MiB name: 
MP_SCSI_TASK_Pool_0 00:06:05.760 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:05.760 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57780_0 00:06:05.760 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:05.760 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57780 00:06:05.760 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:05.760 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57780 00:06:05.760 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:05.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.760 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:05.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.760 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:05.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.760 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:05.760 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.760 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57780 00:06:05.760 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57780 00:06:05.760 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57780 00:06:05.760 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57780 00:06:05.760 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:05.760 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57780 00:06:05.760 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:05.760 associated memzone info: size: 0.500366 MiB name: 
RG_MP_bdev_io_57780 00:06:05.760 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:06:05.760 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.760 element at address: 0x200012c72280 with size: 0.500549 MiB 00:06:05.760 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.760 element at address: 0x20001967c440 with size: 0.250549 MiB 00:06:05.760 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.760 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:05.760 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57780 00:06:05.760 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:05.760 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57780 00:06:05.760 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:06:05.760 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.760 element at address: 0x200028064140 with size: 0.023804 MiB 00:06:05.760 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.760 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:05.760 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57780 00:06:05.760 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:06:05.760 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.760 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:05.760 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57780 00:06:05.760 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:05.760 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57780 00:06:05.760 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:05.760 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57780 00:06:05.760 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:06:05.760 associated memzone info: size: 0.000183 MiB 
name: MP_Session_Pool 00:06:05.760 03:17:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.760 03:17:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57780 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57780 ']' 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57780 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57780 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.760 killing process with pid 57780 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57780' 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57780 00:06:05.760 03:17:19 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57780 00:06:07.665 00:06:07.665 real 0m3.584s 00:06:07.665 user 0m3.619s 00:06:07.665 sys 0m0.594s 00:06:07.665 03:17:21 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.665 ************************************ 00:06:07.665 END TEST dpdk_mem_utility 00:06:07.665 ************************************ 00:06:07.665 03:17:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.665 03:17:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:07.665 03:17:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.665 03:17:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.665 03:17:21 -- common/autotest_common.sh@10 -- # 
set +x 00:06:07.665 ************************************ 00:06:07.665 START TEST event 00:06:07.665 ************************************ 00:06:07.665 03:17:21 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:07.925 * Looking for test storage... 00:06:07.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:07.925 03:17:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.925 03:17:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.925 03:17:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.925 03:17:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.925 03:17:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.925 03:17:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.925 03:17:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.925 03:17:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.925 03:17:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.925 03:17:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.925 03:17:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.925 03:17:21 event -- scripts/common.sh@344 -- # case "$op" in 00:06:07.925 03:17:21 event -- scripts/common.sh@345 -- # : 1 00:06:07.925 03:17:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.925 03:17:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.925 03:17:21 event -- scripts/common.sh@365 -- # decimal 1 00:06:07.925 03:17:21 event -- scripts/common.sh@353 -- # local d=1 00:06:07.925 03:17:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.925 03:17:21 event -- scripts/common.sh@355 -- # echo 1 00:06:07.925 03:17:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.925 03:17:21 event -- scripts/common.sh@366 -- # decimal 2 00:06:07.925 03:17:21 event -- scripts/common.sh@353 -- # local d=2 00:06:07.925 03:17:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.925 03:17:21 event -- scripts/common.sh@355 -- # echo 2 00:06:07.925 03:17:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.925 03:17:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.925 03:17:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.925 03:17:21 event -- scripts/common.sh@368 -- # return 0 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:07.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.925 --rc genhtml_branch_coverage=1 00:06:07.925 --rc genhtml_function_coverage=1 00:06:07.925 --rc genhtml_legend=1 00:06:07.925 --rc geninfo_all_blocks=1 00:06:07.925 --rc geninfo_unexecuted_blocks=1 00:06:07.925 00:06:07.925 ' 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:07.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.925 --rc genhtml_branch_coverage=1 00:06:07.925 --rc genhtml_function_coverage=1 00:06:07.925 --rc genhtml_legend=1 00:06:07.925 --rc geninfo_all_blocks=1 00:06:07.925 --rc geninfo_unexecuted_blocks=1 00:06:07.925 00:06:07.925 ' 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:07.925 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:07.925 --rc genhtml_branch_coverage=1 00:06:07.925 --rc genhtml_function_coverage=1 00:06:07.925 --rc genhtml_legend=1 00:06:07.925 --rc geninfo_all_blocks=1 00:06:07.925 --rc geninfo_unexecuted_blocks=1 00:06:07.925 00:06:07.925 ' 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:07.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.925 --rc genhtml_branch_coverage=1 00:06:07.925 --rc genhtml_function_coverage=1 00:06:07.925 --rc genhtml_legend=1 00:06:07.925 --rc geninfo_all_blocks=1 00:06:07.925 --rc geninfo_unexecuted_blocks=1 00:06:07.925 00:06:07.925 ' 00:06:07.925 03:17:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:07.925 03:17:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:07.925 03:17:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:07.925 03:17:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.925 03:17:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.925 ************************************ 00:06:07.925 START TEST event_perf 00:06:07.925 ************************************ 00:06:07.925 03:17:21 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.925 Running I/O for 1 seconds...[2024-11-05 03:17:21.506098] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
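The `lt 1.15 2` check traced above comes from scripts/common.sh, where `cmp_versions` splits each version string on `.` and `-` and compares the fields numerically. A minimal standalone sketch of that comparison; the function names mirror the trace, but this simplified body is an assumption, not the real implementation:

```shell
# Sketch of the version comparison traced from scripts/common.sh.
# Splits versions on '.' and '-' and compares field by field, numerically.
cmp_versions() {
    local IFS=.- v=0
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    # Walk the longer of the two field lists; missing fields compare as 0.
    while ((v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if ((a > b)); then
            [[ $op == '>' ]]; return
        elif ((a < b)); then
            [[ $op == '<' ]]; return
        fi
        ((v++)) || true
    done
    [[ $op == '>=' || $op == '<=' ]]   # versions are equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov branch in the trace is taken: lcov 1.15 is older than 2, so the script falls back to the `--rc lcov_*` option spelling.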
00:06:07.925 [2024-11-05 03:17:21.506410] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57888 ] 00:06:08.185 [2024-11-05 03:17:21.693872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.444 [2024-11-05 03:17:21.823489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.444 [2024-11-05 03:17:21.823600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.444 Running I/O for 1 seconds...[2024-11-05 03:17:21.823737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.444 [2024-11-05 03:17:21.823754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.382 00:06:09.382 lcore 0: 208588 00:06:09.382 lcore 1: 208588 00:06:09.382 lcore 2: 208589 00:06:09.382 lcore 3: 208589 00:06:09.641 done. 
00:06:09.641 00:06:09.641 real 0m1.584s 00:06:09.641 user 0m4.321s 00:06:09.641 sys 0m0.138s 00:06:09.641 03:17:23 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.641 ************************************ 00:06:09.641 END TEST event_perf 00:06:09.641 03:17:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.641 ************************************ 00:06:09.641 03:17:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:09.641 03:17:23 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:09.641 03:17:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.641 03:17:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.641 ************************************ 00:06:09.641 START TEST event_reactor 00:06:09.641 ************************************ 00:06:09.641 03:17:23 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:09.641 [2024-11-05 03:17:23.142069] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:06:09.641 [2024-11-05 03:17:23.142230] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:06:09.901 [2024-11-05 03:17:23.312924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.901 [2024-11-05 03:17:23.416728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.280 test_start 00:06:11.280 oneshot 00:06:11.280 tick 100 00:06:11.280 tick 100 00:06:11.280 tick 250 00:06:11.280 tick 100 00:06:11.280 tick 100 00:06:11.280 tick 100 00:06:11.280 tick 250 00:06:11.280 tick 500 00:06:11.280 tick 100 00:06:11.280 tick 100 00:06:11.280 tick 250 00:06:11.280 tick 100 00:06:11.280 tick 100 00:06:11.280 test_end 00:06:11.280 ************************************ 00:06:11.280 END TEST event_reactor 00:06:11.280 ************************************ 00:06:11.280 00:06:11.280 real 0m1.512s 00:06:11.280 user 0m1.313s 00:06:11.280 sys 0m0.090s 00:06:11.280 03:17:24 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.280 03:17:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:11.280 03:17:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.280 03:17:24 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:11.280 03:17:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.280 03:17:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.280 ************************************ 00:06:11.280 START TEST event_reactor_perf 00:06:11.280 ************************************ 00:06:11.280 03:17:24 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.280 [2024-11-05 
03:17:24.715094] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:11.280 [2024-11-05 03:17:24.715480] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57964 ] 00:06:11.280 [2024-11-05 03:17:24.897352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.539 [2024-11-05 03:17:25.012604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.918 test_start 00:06:12.918 test_end 00:06:12.918 Performance: 331818 events per second 00:06:12.918 ************************************ 00:06:12.918 END TEST event_reactor_perf 00:06:12.918 ************************************ 00:06:12.918 00:06:12.918 real 0m1.550s 00:06:12.918 user 0m1.334s 00:06:12.918 sys 0m0.106s 00:06:12.918 03:17:26 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.918 03:17:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.918 03:17:26 event -- event/event.sh@49 -- # uname -s 00:06:12.918 03:17:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.918 03:17:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:12.918 03:17:26 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.918 03:17:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.918 03:17:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.918 ************************************ 00:06:12.918 START TEST event_scheduler 00:06:12.918 ************************************ 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:12.918 * Looking for test storage... 
00:06:12.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.918 03:17:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.918 --rc genhtml_branch_coverage=1 00:06:12.918 --rc genhtml_function_coverage=1 00:06:12.918 --rc genhtml_legend=1 00:06:12.918 --rc geninfo_all_blocks=1 00:06:12.918 --rc geninfo_unexecuted_blocks=1 00:06:12.918 00:06:12.918 ' 00:06:12.918 03:17:26 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.918 --rc genhtml_branch_coverage=1 00:06:12.918 --rc genhtml_function_coverage=1 00:06:12.918 --rc 
genhtml_legend=1 00:06:12.919 --rc geninfo_all_blocks=1 00:06:12.919 --rc geninfo_unexecuted_blocks=1 00:06:12.919 00:06:12.919 ' 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.919 --rc genhtml_branch_coverage=1 00:06:12.919 --rc genhtml_function_coverage=1 00:06:12.919 --rc genhtml_legend=1 00:06:12.919 --rc geninfo_all_blocks=1 00:06:12.919 --rc geninfo_unexecuted_blocks=1 00:06:12.919 00:06:12.919 ' 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.919 --rc genhtml_branch_coverage=1 00:06:12.919 --rc genhtml_function_coverage=1 00:06:12.919 --rc genhtml_legend=1 00:06:12.919 --rc geninfo_all_blocks=1 00:06:12.919 --rc geninfo_unexecuted_blocks=1 00:06:12.919 00:06:12.919 ' 00:06:12.919 03:17:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.919 03:17:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58029 00:06:12.919 03:17:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.919 03:17:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.919 03:17:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58029 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58029 ']' 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:12.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:12.919 03:17:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.178 [2024-11-05 03:17:26.572521] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:13.178 [2024-11-05 03:17:26.572931] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58029 ] 00:06:13.178 [2024-11-05 03:17:26.763500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.437 [2024-11-05 03:17:26.925475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.437 [2024-11-05 03:17:26.925564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.437 [2024-11-05 03:17:26.925635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.437 [2024-11-05 03:17:26.925650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.007 03:17:27 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.007 03:17:27 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:14.007 03:17:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:14.007 03:17:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.007 03:17:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.007 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:14.007 POWER: Cannot set governor of lcore 0 to userspace 00:06:14.007 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:14.007 POWER: Cannot set governor of lcore 0 to performance 00:06:14.007 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:14.007 POWER: Cannot set governor of lcore 0 to userspace 00:06:14.007 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:14.007 POWER: Cannot set governor of lcore 0 to userspace 00:06:14.007 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:14.007 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:14.008 POWER: Unable to set Power Management Environment for lcore 0 00:06:14.008 [2024-11-05 03:17:27.524142] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:14.008 [2024-11-05 03:17:27.524168] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:14.008 [2024-11-05 03:17:27.524182] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:14.008 [2024-11-05 03:17:27.524207] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:14.008 [2024-11-05 03:17:27.524218] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:14.008 [2024-11-05 03:17:27.524232] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:14.008 03:17:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.008 03:17:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:14.008 03:17:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.008 03:17:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 [2024-11-05 03:17:27.825837] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
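The scheduler test that follows creates one pinned thread per core with single-bit cpumasks (`-m 0x1`, `0x2`, `0x4`, `0x8`). A short sketch of deriving such masks; `core_mask` is an illustrative helper, not part of the SPDK test scripts:

```shell
#!/usr/bin/env bash
# Each core's pinning mask is just 1 shifted left by the core index.
core_mask() {
    printf '0x%x\n' $(( 1 << $1 ))
}

# Masks for the four cores used by the scheduler test above.
for core in 0 1 2 3; do
    core_mask "$core"
done
```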
00:06:14.267 03:17:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:14.267 03:17:27 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.267 03:17:27 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 ************************************ 00:06:14.267 START TEST scheduler_create_thread 00:06:14.267 ************************************ 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 2 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 3 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 4 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 5 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 6 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.267 7 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 8 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.267 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.526 9 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.526 10 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.526 03:17:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.463 03:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.463 03:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.463 03:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.463 03:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.463 03:17:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.399 ************************************ 00:06:16.399 END TEST scheduler_create_thread 00:06:16.399 ************************************ 00:06:16.399 03:17:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.399 00:06:16.399 real 0m2.135s 00:06:16.399 user 0m0.019s 00:06:16.399 sys 0m0.007s 00:06:16.399 03:17:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.399 03:17:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.399 03:17:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.399 03:17:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58029 00:06:16.399 03:17:30 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58029 ']' 00:06:16.399 03:17:30 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58029 00:06:16.399 03:17:30 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:16.399 03:17:30 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.399 03:17:30 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58029 00:06:16.658 killing process with pid 58029 00:06:16.658 03:17:30 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:16.658 03:17:30 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:16.658 03:17:30 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58029' 00:06:16.658 03:17:30 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58029 00:06:16.658 03:17:30 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58029 00:06:16.917 [2024-11-05 03:17:30.454208] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:17.857 00:06:17.857 real 0m5.144s 00:06:17.857 user 0m8.697s 00:06:17.857 sys 0m0.486s 00:06:17.857 03:17:31 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.857 03:17:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.857 ************************************ 00:06:17.857 END TEST event_scheduler 00:06:17.857 ************************************ 00:06:17.857 03:17:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.857 03:17:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.857 03:17:31 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.857 03:17:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.857 03:17:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.857 ************************************ 00:06:17.857 START TEST app_repeat 00:06:17.857 ************************************ 00:06:17.857 03:17:31 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:17.857 03:17:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.857 03:17:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.857 03:17:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:17.857 03:17:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.857 03:17:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:17.857 03:17:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:17.858 Process app_repeat pid: 58135 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58135 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58135' 00:06:17.858 spdk_app_start Round 0 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.858 03:17:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:06:17.858 03:17:31 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:06:17.858 03:17:31 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.858 03:17:31 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.858 03:17:31 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.858 03:17:31 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.858 03:17:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.121 [2024-11-05 03:17:31.537730] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:06:18.121 [2024-11-05 03:17:31.538053] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58135 ] 00:06:18.121 [2024-11-05 03:17:31.708187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.380 [2024-11-05 03:17:31.826905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.380 [2024-11-05 03:17:31.826916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.948 03:17:32 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.948 03:17:32 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:18.948 03:17:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.208 Malloc0 00:06:19.208 03:17:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.774 Malloc1 00:06:19.774 03:17:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.774 03:17:33 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.774 /dev/nbd0 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.774 03:17:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:19.774 03:17:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.033 1+0 records in 00:06:20.033 1+0 
records out 00:06:20.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629029 s, 6.5 MB/s 00:06:20.033 03:17:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.033 03:17:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:20.033 03:17:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.033 03:17:33 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:20.033 03:17:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:20.033 03:17:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.033 03:17:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.033 03:17:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.328 /dev/nbd1 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.328 1+0 records in 00:06:20.328 1+0 records out 00:06:20.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248743 s, 16.5 MB/s 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:20.328 03:17:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.328 03:17:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.588 { 00:06:20.588 "nbd_device": "/dev/nbd0", 00:06:20.588 "bdev_name": "Malloc0" 00:06:20.588 }, 00:06:20.588 { 00:06:20.588 "nbd_device": "/dev/nbd1", 00:06:20.588 "bdev_name": "Malloc1" 00:06:20.588 } 00:06:20.588 ]' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.588 { 00:06:20.588 "nbd_device": "/dev/nbd0", 00:06:20.588 "bdev_name": "Malloc0" 00:06:20.588 }, 00:06:20.588 { 00:06:20.588 "nbd_device": "/dev/nbd1", 00:06:20.588 "bdev_name": "Malloc1" 00:06:20.588 } 00:06:20.588 ]' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
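The trace above extracts device paths from the `nbd_get_disks` JSON with jq and counts them with `grep -c`. The counting step can be reproduced standalone with a hard-coded stand-in for the jq output; note that `grep -c` exits non-zero when it finds no matches, which is why the helper falls back to `true` (visible later in this trace as `-- # true` once the disk list is empty):

```shell
# Stand-in for the jq output seen in the trace: one device path per line.
names='/dev/nbd0
/dev/nbd1'

# grep -c prints the match count but exits 1 on zero matches, so guard it
# with || true the way nbd_get_count does to survive under `set -e`.
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "$count"        # 2

# After nbd_stop_disk the list is empty and the same pipeline yields 0.
empty=$(echo '' | grep -c /dev/nbd || true)
echo "$empty"        # 0
```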
00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.588 /dev/nbd1' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.588 /dev/nbd1' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.588 256+0 records in 00:06:20.588 256+0 records out 00:06:20.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103834 s, 101 MB/s 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.588 256+0 records in 00:06:20.588 256+0 records out 00:06:20.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313359 s, 33.5 MB/s 00:06:20.588 03:17:34 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.588 256+0 records in 00:06:20.588 256+0 records out 00:06:20.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287951 s, 36.4 MB/s 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.588 03:17:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.847 03:17:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.414 03:17:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.415 03:17:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.673 03:17:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.673 03:17:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.933 03:17:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.870 [2024-11-05 03:17:36.469054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.129 [2024-11-05 03:17:36.573578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.129 [2024-11-05 03:17:36.573588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.129 
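The `waitfornbd` helper seen throughout the trace polls `/proc/partitions` up to 20 times for the device name before giving up, and `waitfornbd_exit` does the inverse on teardown. A minimal sketch of that bounded-retry shape, polling for an ordinary file so it runs without an NBD device (the function name and path here are illustrative, not part of the SPDK scripts):

```shell
# Bounded retry loop in the style of waitfornbd: succeed as soon as the
# condition holds, fail after 20 attempts (~2 s at 0.1 s per poll).
wait_for_path() {
    local path=$1 i
    for ((i = 1; i <= 20; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp)                              # exists immediately: returns 0
wait_for_path "$tmp" && echo "present"
rm -f "$tmp"
wait_for_path "$tmp" || echo "timed out"   # polls 20 times, then fails
```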
[2024-11-05 03:17:36.742885] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.129 [2024-11-05 03:17:36.742976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.034 spdk_app_start Round 1 00:06:25.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.034 03:17:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.034 03:17:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.034 03:17:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:06:25.034 03:17:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:06:25.034 03:17:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.034 03:17:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.034 03:17:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:25.034 03:17:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.034 03:17:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.292 03:17:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.292 03:17:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:25.292 03:17:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.551 Malloc0 00:06:25.551 03:17:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.810 Malloc1 00:06:26.069 03:17:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.069 03:17:39 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.069 03:17:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.069 /dev/nbd0 00:06:26.327 03:17:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.327 03:17:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.327 1+0 records in 00:06:26.327 1+0 records out 00:06:26.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419872 s, 9.8 MB/s 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.327 03:17:39 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:26.327 03:17:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:26.327 03:17:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.327 03:17:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.327 03:17:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.600 /dev/nbd1 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.600 1+0 records in 00:06:26.600 1+0 records out 00:06:26.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417655 s, 9.8 MB/s 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:26.600 03:17:40 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:26.600 03:17:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.600 03:17:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.870 { 00:06:26.870 "nbd_device": "/dev/nbd0", 00:06:26.870 "bdev_name": "Malloc0" 00:06:26.870 }, 00:06:26.870 { 00:06:26.870 "nbd_device": "/dev/nbd1", 00:06:26.870 "bdev_name": "Malloc1" 00:06:26.870 } 00:06:26.870 ]' 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.870 { 00:06:26.870 "nbd_device": "/dev/nbd0", 00:06:26.870 "bdev_name": "Malloc0" 00:06:26.870 }, 00:06:26.870 { 00:06:26.870 "nbd_device": "/dev/nbd1", 00:06:26.870 "bdev_name": "Malloc1" 00:06:26.870 } 00:06:26.870 ]' 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.870 /dev/nbd1' 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.870 /dev/nbd1' 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.870 
03:17:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.870 03:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.871 256+0 records in 00:06:26.871 256+0 records out 00:06:26.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731357 s, 143 MB/s 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.871 256+0 records in 00:06:26.871 256+0 records out 00:06:26.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269914 s, 38.8 MB/s 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.871 03:17:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.130 256+0 records in 00:06:27.130 256+0 records out 00:06:27.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029725 s, 35.3 MB/s 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
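`nbd_dd_data_verify`, traced above in both its write and verify phases, fills a 1 MiB temp file from /dev/urandom, dd's it onto every exported device, then cmp's each device back against the file. The same shape with regular temp files standing in for /dev/nbd0 and /dev/nbd1 (`oflag=direct` is dropped here because O_DIRECT is only meaningful for the real block devices):

```shell
src=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)

# Write phase: 256 blocks of 4096 bytes = 1 MiB of random data.
dd if=/dev/urandom of="$src" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$src" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: cmp exits non-zero on the first differing byte, so the
# loop's status reflects whether every device matched the source file.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$src" "$dev"
done && echo "verify ok"

rm -f "$src" "$dev0" "$dev1"
```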
00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.130 03:17:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.389 03:17:40 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.389 03:17:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.648 03:17:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.907 03:17:41 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.907 03:17:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.907 03:17:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.474 03:17:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.410 [2024-11-05 03:17:42.835853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.410 [2024-11-05 03:17:42.941590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.410 [2024-11-05 03:17:42.941593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.668 [2024-11-05 03:17:43.113825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.668 [2024-11-05 03:17:43.113934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
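Each round ends with `spdk_kill_instance SIGTERM` over the RPC socket, after which `event.sh` sleeps three seconds before the next iteration. The underlying terminate-and-reap pattern, shown with an ordinary background process standing in for the SPDK app:

```shell
# Start a stand-in long-running process.
sleep 60 &
pid=$!

# Ask it to exit, then reap it; a child terminated by SIGTERM reports
# exit status 128 + 15 = 143 through `wait`.
kill -TERM "$pid"
status=0
wait "$pid" || status=$?
echo "$status"    # 143
```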
00:06:31.571 03:17:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.571 spdk_app_start Round 2 00:06:31.571 03:17:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.571 03:17:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:06:31.571 03:17:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:06:31.571 03:17:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.571 03:17:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.571 03:17:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.571 03:17:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.571 03:17:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.571 03:17:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.571 03:17:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:31.571 03:17:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.830 Malloc0 00:06:32.090 03:17:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.349 Malloc1 00:06:32.349 03:17:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.349 
03:17:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.349 03:17:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.607 /dev/nbd0 00:06:32.607 03:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.607 03:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:32.607 03:17:46 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.607 1+0 records in 00:06:32.607 1+0 records out 00:06:32.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290977 s, 14.1 MB/s 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.607 03:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:32.608 03:17:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.608 03:17:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:32.608 03:17:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:32.608 03:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.608 03:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.608 03:17:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.866 /dev/nbd1 00:06:32.866 03:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.866 03:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:32.866 03:17:46 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.866 1+0 records in 00:06:32.866 1+0 records out 00:06:32.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302283 s, 13.6 MB/s 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:32.866 03:17:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.867 03:17:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:32.867 03:17:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:32.867 03:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.867 03:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.867 03:17:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.867 03:17:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.867 03:17:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.434 { 00:06:33.434 "nbd_device": "/dev/nbd0", 00:06:33.434 "bdev_name": "Malloc0" 00:06:33.434 }, 00:06:33.434 { 00:06:33.434 "nbd_device": "/dev/nbd1", 00:06:33.434 "bdev_name": 
"Malloc1" 00:06:33.434 } 00:06:33.434 ]' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.434 { 00:06:33.434 "nbd_device": "/dev/nbd0", 00:06:33.434 "bdev_name": "Malloc0" 00:06:33.434 }, 00:06:33.434 { 00:06:33.434 "nbd_device": "/dev/nbd1", 00:06:33.434 "bdev_name": "Malloc1" 00:06:33.434 } 00:06:33.434 ]' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.434 /dev/nbd1' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.434 /dev/nbd1' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.434 256+0 records in 00:06:33.434 256+0 records out 00:06:33.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721989 s, 145 MB/s 
00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.434 256+0 records in 00:06:33.434 256+0 records out 00:06:33.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277478 s, 37.8 MB/s 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.434 256+0 records in 00:06:33.434 256+0 records out 00:06:33.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375426 s, 27.9 MB/s 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.434 03:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.435 03:17:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.694 03:17:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.268 03:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.534 03:17:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.534 03:17:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.794 03:17:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.730 [2024-11-05 03:17:49.343722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.990 [2024-11-05 03:17:49.440959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.990 [2024-11-05 03:17:49.440974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.990 [2024-11-05 03:17:49.615828] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.990 [2024-11-05 03:17:49.616234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.893 03:17:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:06:37.893 03:17:51 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:06:37.893 03:17:51 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.893 03:17:51 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.893 03:17:51 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:37.893 03:17:51 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.893 03:17:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:38.154 03:17:51 event.app_repeat -- event/event.sh@39 -- # killprocess 58135 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58135 ']' 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58135 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58135 00:06:38.154 killing process with pid 58135 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58135' 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58135 00:06:38.154 03:17:51 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58135 00:06:39.092 spdk_app_start is called in Round 0. 00:06:39.092 Shutdown signal received, stop current app iteration 00:06:39.092 Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 reinitialization... 00:06:39.092 spdk_app_start is called in Round 1. 00:06:39.092 Shutdown signal received, stop current app iteration 00:06:39.092 Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 reinitialization... 00:06:39.092 spdk_app_start is called in Round 2. 
00:06:39.092 Shutdown signal received, stop current app iteration 00:06:39.092 Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 reinitialization... 00:06:39.092 spdk_app_start is called in Round 3. 00:06:39.092 Shutdown signal received, stop current app iteration 00:06:39.092 03:17:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:39.092 03:17:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:39.092 00:06:39.092 real 0m21.123s 00:06:39.092 user 0m46.888s 00:06:39.092 sys 0m2.994s 00:06:39.092 03:17:52 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.092 ************************************ 00:06:39.092 END TEST app_repeat 00:06:39.092 ************************************ 00:06:39.092 03:17:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.092 03:17:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:39.092 03:17:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.092 03:17:52 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.092 03:17:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.092 03:17:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.092 ************************************ 00:06:39.092 START TEST cpu_locks 00:06:39.092 ************************************ 00:06:39.092 03:17:52 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.352 * Looking for test storage... 
00:06:39.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.352 03:17:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.352 --rc genhtml_branch_coverage=1 00:06:39.352 --rc genhtml_function_coverage=1 00:06:39.352 --rc genhtml_legend=1 00:06:39.352 --rc geninfo_all_blocks=1 00:06:39.352 --rc geninfo_unexecuted_blocks=1 00:06:39.352 00:06:39.352 ' 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.352 --rc genhtml_branch_coverage=1 00:06:39.352 --rc genhtml_function_coverage=1 00:06:39.352 --rc genhtml_legend=1 00:06:39.352 --rc geninfo_all_blocks=1 00:06:39.352 --rc geninfo_unexecuted_blocks=1 
00:06:39.352 00:06:39.352 ' 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.352 --rc genhtml_branch_coverage=1 00:06:39.352 --rc genhtml_function_coverage=1 00:06:39.352 --rc genhtml_legend=1 00:06:39.352 --rc geninfo_all_blocks=1 00:06:39.352 --rc geninfo_unexecuted_blocks=1 00:06:39.352 00:06:39.352 ' 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.352 --rc genhtml_branch_coverage=1 00:06:39.352 --rc genhtml_function_coverage=1 00:06:39.352 --rc genhtml_legend=1 00:06:39.352 --rc geninfo_all_blocks=1 00:06:39.352 --rc geninfo_unexecuted_blocks=1 00:06:39.352 00:06:39.352 ' 00:06:39.352 03:17:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:39.352 03:17:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:39.352 03:17:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:39.352 03:17:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.352 03:17:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.352 ************************************ 00:06:39.352 START TEST default_locks 00:06:39.352 ************************************ 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58604 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58604 00:06:39.352 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58604 ']' 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.352 03:17:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.352 [2024-11-05 03:17:52.969406] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:06:39.352 [2024-11-05 03:17:52.969543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58604 ] 00:06:39.612 [2024-11-05 03:17:53.143138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.871 [2024-11-05 03:17:53.264731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.439 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.439 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:40.439 03:17:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58604 00:06:40.439 03:17:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58604 00:06:40.439 03:17:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58604 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58604 ']' 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58604 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58604 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.007 killing process with pid 58604 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58604' 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58604 00:06:41.007 03:17:54 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58604 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58604 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58604 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58604 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58604 ']' 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.913 ERROR: process (pid: 58604) is no longer running 00:06:42.913 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58604) - No such process 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.913 00:06:42.913 real 0m3.643s 00:06:42.913 user 0m3.691s 00:06:42.913 sys 0m0.719s 00:06:42.913 ************************************ 00:06:42.913 END TEST default_locks 00:06:42.913 ************************************ 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.913 03:17:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.173 03:17:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:43.173 03:17:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:06:43.173 03:17:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.173 03:17:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.173 ************************************ 00:06:43.173 START TEST default_locks_via_rpc 00:06:43.173 ************************************ 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58674 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58674 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58674 ']' 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.173 03:17:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.173 [2024-11-05 03:17:56.697534] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:06:43.173 [2024-11-05 03:17:56.697751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58674 ] 00:06:43.432 [2024-11-05 03:17:56.882431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.432 [2024-11-05 03:17:57.000256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.369 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.370 03:17:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58674 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.370 03:17:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58674 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58674 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58674 ']' 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58674 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58674 00:06:44.628 killing process with pid 58674 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58674' 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58674 00:06:44.628 03:17:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58674 00:06:47.165 00:06:47.165 real 0m3.709s 00:06:47.165 user 0m3.741s 00:06:47.165 sys 0m0.709s 00:06:47.165 03:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.165 03:18:00 
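The `locks_exist` helper exercised above works by listing the locks held by the target PID and grepping for the `spdk_cpu_lock` prefix. A minimal sketch of that check, using a canned `lslocks`-style line instead of a live process (the sample row and its column layout are illustrative, not the exact `lslocks` output format):

```shell
# Hypothetical stand-in for: lslocks -p "$pid" | grep -q spdk_cpu_lock
# SPDK core locks live under /var/tmp/spdk_cpu_lock_NNN.
sample='spdk_tgt 58674 FLOCK 0B WRITE 0 0 0 /var/tmp/spdk_cpu_lock_000'
if printf '%s\n' "$sample" | grep -q spdk_cpu_lock; then
  result="core lock present"
else
  result="no core lock"
fi
echo "$result"
```

In the real test, an empty grep (no `spdk_cpu_lock` entries) is what `no_locks` asserts after `framework_disable_cpumask_locks`.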
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.165 ************************************ 00:06:47.165 END TEST default_locks_via_rpc 00:06:47.165 ************************************ 00:06:47.165 03:18:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:47.165 03:18:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.165 03:18:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.165 03:18:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.165 ************************************ 00:06:47.165 START TEST non_locking_app_on_locked_coremask 00:06:47.165 ************************************ 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58748 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58748 /var/tmp/spdk.sock 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58748 ']' 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.165 03:18:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.165 [2024-11-05 03:18:00.457335] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:47.165 [2024-11-05 03:18:00.457522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58748 ] 00:06:47.165 [2024-11-05 03:18:00.633618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.165 [2024-11-05 03:18:00.752859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58764 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58764 /var/tmp/spdk2.sock 00:06:48.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58764 ']' 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.103 03:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.103 [2024-11-05 03:18:01.710846] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:48.103 [2024-11-05 03:18:01.711548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58764 ] 00:06:48.362 [2024-11-05 03:18:01.910584] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.362 [2024-11-05 03:18:01.910685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.620 [2024-11-05 03:18:02.158474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.155 03:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.155 03:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:51.155 03:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58748 00:06:51.155 03:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58748 00:06:51.155 03:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58748 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58748 ']' 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58748 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58748 00:06:51.722 killing process with pid 58748 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58748' 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58748 00:06:51.722 03:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58748 00:06:55.912 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58764 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58764 ']' 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58764 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58764 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58764' 00:06:55.913 killing process with pid 58764 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58764 00:06:55.913 03:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58764 00:06:57.827 00:06:57.827 real 0m11.013s 00:06:57.827 user 0m11.548s 00:06:57.827 sys 0m1.548s 00:06:57.827 03:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:57.827 03:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.827 ************************************ 00:06:57.827 END TEST non_locking_app_on_locked_coremask 00:06:57.827 ************************************ 00:06:57.827 03:18:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.827 03:18:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.827 03:18:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.827 03:18:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.827 ************************************ 00:06:57.827 START TEST locking_app_on_unlocked_coremask 00:06:57.827 ************************************ 00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58907 00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58907 /var/tmp/spdk.sock 00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58907 ']' 00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.827 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.828 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.828 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.828 03:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.087 [2024-11-05 03:18:11.538010] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:58.087 [2024-11-05 03:18:11.539423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58907 ] 00:06:58.346 [2024-11-05 03:18:11.733215] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.346 [2024-11-05 03:18:11.733329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.346 [2024-11-05 03:18:11.892832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58930 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58930 /var/tmp/spdk2.sock 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58930 ']' 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.283 03:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.283 [2024-11-05 03:18:12.838550] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:06:59.283 [2024-11-05 03:18:12.838802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:06:59.542 [2024-11-05 03:18:13.034796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.800 [2024-11-05 03:18:13.280137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.333 03:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.333 03:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:02.333 03:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58930 00:07:02.333 03:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58930 00:07:02.333 03:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58907 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58907 ']' 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58907 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58907 00:07:02.900 killing process with pid 58907 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58907' 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58907 00:07:02.900 03:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58907 00:07:07.090 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58930 00:07:07.090 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58930 ']' 00:07:07.090 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58930 00:07:07.090 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:07.090 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:07.090 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58930 00:07:07.349 killing process with pid 58930 00:07:07.349 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:07.349 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:07.349 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58930' 00:07:07.349 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58930 00:07:07.349 03:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 58930 00:07:09.254 00:07:09.254 real 0m11.441s 00:07:09.254 user 0m11.982s 00:07:09.254 sys 0m1.518s 00:07:09.254 03:18:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.254 ************************************ 00:07:09.254 END TEST locking_app_on_unlocked_coremask 00:07:09.254 ************************************ 00:07:09.254 03:18:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.254 03:18:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.254 03:18:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.254 03:18:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.254 03:18:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.513 ************************************ 00:07:09.513 START TEST locking_app_on_locked_coremask 00:07:09.513 ************************************ 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59076 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59076 /var/tmp/spdk.sock 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59076 ']' 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.513 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.513 03:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.513 [2024-11-05 03:18:23.011175] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:09.513 [2024-11-05 03:18:23.011532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59076 ] 00:07:09.772 [2024-11-05 03:18:23.182759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.772 [2024-11-05 03:18:23.310561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59097 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59097 /var/tmp/spdk2.sock 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59097 /var/tmp/spdk2.sock 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59097 /var/tmp/spdk2.sock 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59097 ']' 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.739 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.739 [2024-11-05 03:18:24.275937] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:10.739 [2024-11-05 03:18:24.276445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ] 00:07:10.998 [2024-11-05 03:18:24.466104] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59076 has claimed it. 00:07:10.998 [2024-11-05 03:18:24.466213] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.567 ERROR: process (pid: 59097) is no longer running 00:07:11.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59097) - No such process 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59076 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59076 00:07:11.567 03:18:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.826 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59076 00:07:11.826 03:18:25 
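The "Cannot create lock on core 0, probably process 59076 has claimed it" failure above comes from an exclusive, non-blocking file lock on the per-core lock file: the first target holds it, so a second target with an overlapping core mask exits. A minimal sketch of that behavior with `flock(1)` on a temp file (the path is a stand-in for SPDK's `/var/tmp/spdk_cpu_lock_NNN`):

```shell
# First "process": take an exclusive non-blocking lock on the core-lock file.
lock=$(mktemp)
exec 9>"$lock"
if flock -n 9; then first=acquired; else first=busy; fi

# Second "process": a separate open file description on the same path
# cannot take the lock while fd 9 holds it, mirroring the claimed-core error.
second=$( (exec 8>"$lock"; flock -n 8 && echo free || echo claimed) )

echo "first=$first second=$second"
exec 9>&-   # releasing fd 9 drops the lock
rm -f "$lock"
```

Passing `--disable-cpumask-locks` (as the `spdk2.sock` instances above do) skips taking these locks entirely, which is why two targets on core mask 0x1 can coexist in the non-locking tests.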
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59076 ']' 00:07:11.826 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59076 00:07:11.826 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:11.826 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:11.826 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59076 00:07:12.085 killing process with pid 59076 00:07:12.085 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.085 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.085 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59076' 00:07:12.085 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59076 00:07:12.085 03:18:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59076 00:07:13.991 00:07:13.991 real 0m4.597s 00:07:13.991 user 0m4.909s 00:07:13.991 sys 0m0.868s 00:07:13.991 03:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.991 ************************************ 00:07:13.991 03:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.991 END TEST locking_app_on_locked_coremask 00:07:13.991 ************************************ 00:07:13.991 03:18:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.991 03:18:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 
00:07:13.991 03:18:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.991 03:18:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.991 ************************************ 00:07:13.991 START TEST locking_overlapped_coremask 00:07:13.991 ************************************ 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59163 00:07:13.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59163 /var/tmp/spdk.sock 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59163 ']' 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.991 03:18:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.250 [2024-11-05 03:18:27.682154] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:14.250 [2024-11-05 03:18:27.682348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59163 ] 00:07:14.250 [2024-11-05 03:18:27.868426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.509 [2024-11-05 03:18:27.982103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.509 [2024-11-05 03:18:27.982210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.509 [2024-11-05 03:18:27.982228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59181 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59181 /var/tmp/spdk2.sock 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59181 /var/tmp/spdk2.sock 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:15.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59181 /var/tmp/spdk2.sock 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59181 ']' 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.448 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.449 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.449 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.449 03:18:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.449 [2024-11-05 03:18:28.940577] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:15.449 [2024-11-05 03:18:28.940754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59181 ] 00:07:15.708 [2024-11-05 03:18:29.141179] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59163 has claimed it. 00:07:15.708 [2024-11-05 03:18:29.141246] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:15.966 ERROR: process (pid: 59181) is no longer running 00:07:15.966 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59181) - No such process 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.966 03:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59163 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59163 ']' 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59163 00:07:15.967 03:18:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59163 00:07:15.967 killing process with pid 59163 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59163' 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59163 00:07:15.967 03:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59163 00:07:18.511 ************************************ 00:07:18.511 END TEST locking_overlapped_coremask 00:07:18.511 ************************************ 00:07:18.511 00:07:18.511 real 0m4.122s 00:07:18.511 user 0m11.201s 00:07:18.511 sys 0m0.699s 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.511 03:18:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.511 03:18:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.511 03:18:31 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.511 03:18:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.511 ************************************ 00:07:18.511 START TEST 
locking_overlapped_coremask_via_rpc 00:07:18.511 ************************************ 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59245 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59245 /var/tmp/spdk.sock 00:07:18.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59245 ']' 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.511 03:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.511 [2024-11-05 03:18:31.854550] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:18.511 [2024-11-05 03:18:31.854750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59245 ] 00:07:18.511 [2024-11-05 03:18:32.035304] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:18.511 [2024-11-05 03:18:32.035377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.770 [2024-11-05 03:18:32.155238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.770 [2024-11-05 03:18:32.155378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.770 [2024-11-05 03:18:32.155395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59263 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59263 /var/tmp/spdk2.sock 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59263 ']' 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:19.708 03:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.708 [2024-11-05 03:18:33.107883] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:19.708 [2024-11-05 03:18:33.108012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59263 ] 00:07:19.708 [2024-11-05 03:18:33.304537] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.708 [2024-11-05 03:18:33.304606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.967 [2024-11-05 03:18:33.563166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.967 [2024-11-05 03:18:33.566490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.967 [2024-11-05 03:18:33.566509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.504 03:18:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 [2024-11-05 03:18:35.790514] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59245 has claimed it. 00:07:22.504 request: 00:07:22.504 { 00:07:22.504 "method": "framework_enable_cpumask_locks", 00:07:22.504 "req_id": 1 00:07:22.504 } 00:07:22.504 Got JSON-RPC error response 00:07:22.504 response: 00:07:22.504 { 00:07:22.504 "code": -32603, 00:07:22.504 "message": "Failed to claim CPU core: 2" 00:07:22.504 } 00:07:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59245 /var/tmp/spdk.sock 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59245 ']' 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.504 03:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59263 /var/tmp/spdk2.sock 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59263 ']' 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.504 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.764 00:07:22.764 real 0m4.653s 00:07:22.764 user 0m1.689s 00:07:22.764 sys 0m0.247s 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.764 03:18:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.764 ************************************ 00:07:22.764 END TEST locking_overlapped_coremask_via_rpc 00:07:22.764 ************************************ 00:07:23.023 03:18:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.023 03:18:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59245 ]] 00:07:23.023 03:18:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59245 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59245 ']' 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59245 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59245 00:07:23.023 killing process with pid 59245 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59245' 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59245 00:07:23.023 03:18:36 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59245 00:07:25.557 03:18:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59263 ]] 00:07:25.557 03:18:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59263 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59263 ']' 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59263 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59263 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:25.557 killing process with pid 59263 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59263' 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59263 00:07:25.557 03:18:38 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59263 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.462 Process with pid 59245 is not found 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59245 ]] 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59245 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59245 ']' 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59245 00:07:27.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59245) - No such process 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59245 is not found' 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59263 ]] 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59263 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59263 ']' 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59263 00:07:27.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59263) - No such process 00:07:27.462 Process with pid 59263 is not found 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59263 is not found' 00:07:27.462 03:18:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.462 00:07:27.462 real 0m48.131s 00:07:27.462 user 1m23.652s 00:07:27.462 sys 0m7.542s 00:07:27.462 ************************************ 00:07:27.462 END TEST cpu_locks 00:07:27.462 ************************************ 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:27.462 03:18:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.463 00:07:27.463 real 1m19.560s 00:07:27.463 user 2m26.423s 00:07:27.463 sys 0m11.626s 00:07:27.463 03:18:40 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.463 ************************************ 00:07:27.463 END TEST event 00:07:27.463 ************************************ 00:07:27.463 03:18:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.463 03:18:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:27.463 03:18:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:27.463 03:18:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.463 03:18:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.463 ************************************ 00:07:27.463 START TEST thread 00:07:27.463 ************************************ 00:07:27.463 03:18:40 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:27.463 * Looking for test storage... 
00:07:27.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:27.463 03:18:40 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:27.463 03:18:40 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:27.463 03:18:40 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:27.463 03:18:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.463 03:18:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.463 03:18:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.463 03:18:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.463 03:18:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.463 03:18:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.463 03:18:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.463 03:18:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.463 03:18:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.463 03:18:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.463 03:18:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.463 03:18:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:27.463 03:18:41 thread -- scripts/common.sh@345 -- # : 1 00:07:27.463 03:18:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.463 03:18:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.463 03:18:41 thread -- scripts/common.sh@365 -- # decimal 1 00:07:27.463 03:18:41 thread -- scripts/common.sh@353 -- # local d=1 00:07:27.463 03:18:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.463 03:18:41 thread -- scripts/common.sh@355 -- # echo 1 00:07:27.463 03:18:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.463 03:18:41 thread -- scripts/common.sh@366 -- # decimal 2 00:07:27.463 03:18:41 thread -- scripts/common.sh@353 -- # local d=2 00:07:27.463 03:18:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.463 03:18:41 thread -- scripts/common.sh@355 -- # echo 2 00:07:27.463 03:18:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.463 03:18:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.463 03:18:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.463 03:18:41 thread -- scripts/common.sh@368 -- # return 0 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:27.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.463 --rc genhtml_branch_coverage=1 00:07:27.463 --rc genhtml_function_coverage=1 00:07:27.463 --rc genhtml_legend=1 00:07:27.463 --rc geninfo_all_blocks=1 00:07:27.463 --rc geninfo_unexecuted_blocks=1 00:07:27.463 00:07:27.463 ' 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:27.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.463 --rc genhtml_branch_coverage=1 00:07:27.463 --rc genhtml_function_coverage=1 00:07:27.463 --rc genhtml_legend=1 00:07:27.463 --rc geninfo_all_blocks=1 00:07:27.463 --rc geninfo_unexecuted_blocks=1 00:07:27.463 00:07:27.463 ' 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:27.463 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.463 --rc genhtml_branch_coverage=1 00:07:27.463 --rc genhtml_function_coverage=1 00:07:27.463 --rc genhtml_legend=1 00:07:27.463 --rc geninfo_all_blocks=1 00:07:27.463 --rc geninfo_unexecuted_blocks=1 00:07:27.463 00:07:27.463 ' 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:27.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.463 --rc genhtml_branch_coverage=1 00:07:27.463 --rc genhtml_function_coverage=1 00:07:27.463 --rc genhtml_legend=1 00:07:27.463 --rc geninfo_all_blocks=1 00:07:27.463 --rc geninfo_unexecuted_blocks=1 00:07:27.463 00:07:27.463 ' 00:07:27.463 03:18:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.463 03:18:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.463 ************************************ 00:07:27.463 START TEST thread_poller_perf 00:07:27.463 ************************************ 00:07:27.463 03:18:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.722 [2024-11-05 03:18:41.127837] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:27.722 [2024-11-05 03:18:41.128180] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:07:27.722 [2024-11-05 03:18:41.318492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.981 [2024-11-05 03:18:41.470053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.981 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:29.359 [2024-11-05T03:18:42.998Z] ====================================== 00:07:29.359 [2024-11-05T03:18:42.998Z] busy:2216894098 (cyc) 00:07:29.359 [2024-11-05T03:18:42.998Z] total_run_count: 333000 00:07:29.359 [2024-11-05T03:18:42.998Z] tsc_hz: 2200000000 (cyc) 00:07:29.359 [2024-11-05T03:18:42.998Z] ====================================== 00:07:29.359 [2024-11-05T03:18:42.998Z] poller_cost: 6657 (cyc), 3025 (nsec) 00:07:29.359 00:07:29.359 real 0m1.615s 00:07:29.359 user 0m1.385s 00:07:29.359 sys 0m0.120s 00:07:29.359 03:18:42 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.359 ************************************ 00:07:29.359 END TEST thread_poller_perf 00:07:29.359 ************************************ 00:07:29.359 03:18:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.359 03:18:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.359 03:18:42 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:29.359 03:18:42 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.359 03:18:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.359 ************************************ 00:07:29.359 START TEST thread_poller_perf 00:07:29.359 
************************************ 00:07:29.359 03:18:42 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.359 [2024-11-05 03:18:42.791216] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:29.359 [2024-11-05 03:18:42.791440] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59495 ] 00:07:29.359 [2024-11-05 03:18:42.970426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.620 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:29.620 [2024-11-05 03:18:43.089324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.998 [2024-11-05T03:18:44.637Z] ====================================== 00:07:30.998 [2024-11-05T03:18:44.637Z] busy:2203658452 (cyc) 00:07:30.998 [2024-11-05T03:18:44.637Z] total_run_count: 4348000 00:07:30.998 [2024-11-05T03:18:44.637Z] tsc_hz: 2200000000 (cyc) 00:07:30.998 [2024-11-05T03:18:44.637Z] ====================================== 00:07:30.998 [2024-11-05T03:18:44.637Z] poller_cost: 506 (cyc), 230 (nsec) 00:07:30.998 00:07:30.998 real 0m1.550s 00:07:30.998 user 0m1.329s 00:07:30.998 sys 0m0.112s 00:07:30.998 03:18:44 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.998 ************************************ 00:07:30.998 END TEST thread_poller_perf 00:07:30.998 ************************************ 00:07:30.998 03:18:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.998 03:18:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:30.998 00:07:30.998 real 0m3.463s 00:07:30.998 user 0m2.866s 00:07:30.998 sys 0m0.376s 00:07:30.998 03:18:44 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.998 03:18:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.998 ************************************ 00:07:30.998 END TEST thread 00:07:30.998 ************************************ 00:07:30.998 03:18:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:30.998 03:18:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:30.998 03:18:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.998 03:18:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.998 03:18:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.998 ************************************ 00:07:30.998 START TEST app_cmdline 00:07:30.998 ************************************ 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:30.998 * Looking for test storage... 00:07:30.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.998 03:18:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:30.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.998 --rc genhtml_branch_coverage=1 00:07:30.998 --rc genhtml_function_coverage=1 00:07:30.998 --rc 
genhtml_legend=1 00:07:30.998 --rc geninfo_all_blocks=1 00:07:30.998 --rc geninfo_unexecuted_blocks=1 00:07:30.998 00:07:30.998 ' 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:30.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.998 --rc genhtml_branch_coverage=1 00:07:30.998 --rc genhtml_function_coverage=1 00:07:30.998 --rc genhtml_legend=1 00:07:30.998 --rc geninfo_all_blocks=1 00:07:30.998 --rc geninfo_unexecuted_blocks=1 00:07:30.998 00:07:30.998 ' 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:30.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.998 --rc genhtml_branch_coverage=1 00:07:30.998 --rc genhtml_function_coverage=1 00:07:30.998 --rc genhtml_legend=1 00:07:30.998 --rc geninfo_all_blocks=1 00:07:30.998 --rc geninfo_unexecuted_blocks=1 00:07:30.998 00:07:30.998 ' 00:07:30.998 03:18:44 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:30.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.998 --rc genhtml_branch_coverage=1 00:07:30.998 --rc genhtml_function_coverage=1 00:07:30.998 --rc genhtml_legend=1 00:07:30.998 --rc geninfo_all_blocks=1 00:07:30.999 --rc geninfo_unexecuted_blocks=1 00:07:30.999 00:07:30.999 ' 00:07:30.999 03:18:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:30.999 03:18:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:30.999 03:18:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59584 00:07:30.999 03:18:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59584 00:07:30.999 03:18:44 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59584 ']' 00:07:30.999 03:18:44 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.999 03:18:44 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:07:30.999 03:18:44 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.999 03:18:44 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.999 03:18:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.257 [2024-11-05 03:18:44.705322] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:31.257 [2024-11-05 03:18:44.705985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59584 ] 00:07:31.257 [2024-11-05 03:18:44.891078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.516 [2024-11-05 03:18:45.008231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.451 03:18:45 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.451 03:18:45 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:32.451 03:18:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:32.451 { 00:07:32.451 "version": "SPDK v25.01-pre git sha1 d0fd7ad59", 00:07:32.451 "fields": { 00:07:32.451 "major": 25, 00:07:32.451 "minor": 1, 00:07:32.451 "patch": 0, 00:07:32.451 "suffix": "-pre", 00:07:32.451 "commit": "d0fd7ad59" 00:07:32.451 } 00:07:32.451 } 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:32.451 03:18:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:32.451 03:18:46 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.451 03:18:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.451 03:18:46 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.710 03:18:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:32.710 03:18:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:32.710 03:18:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:32.710 03:18:46 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.969 request: 00:07:32.969 { 00:07:32.969 "method": "env_dpdk_get_mem_stats", 00:07:32.969 "req_id": 1 00:07:32.969 } 00:07:32.969 Got JSON-RPC error response 00:07:32.969 response: 00:07:32.969 { 00:07:32.969 "code": -32601, 00:07:32.969 "message": "Method not found" 00:07:32.969 } 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.969 03:18:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59584 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59584 ']' 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59584 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59584 00:07:32.969 killing process with pid 59584 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59584' 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@971 -- # kill 59584 00:07:32.969 03:18:46 app_cmdline -- common/autotest_common.sh@976 -- # wait 59584 00:07:34.873 00:07:34.873 real 0m4.009s 00:07:34.873 user 0m4.500s 00:07:34.873 sys 0m0.635s 00:07:34.873 
************************************ 00:07:34.873 END TEST app_cmdline 00:07:34.873 ************************************ 00:07:34.873 03:18:48 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.873 03:18:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.873 03:18:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.873 03:18:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.873 03:18:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.873 03:18:48 -- common/autotest_common.sh@10 -- # set +x 00:07:34.873 ************************************ 00:07:34.873 START TEST version 00:07:34.873 ************************************ 00:07:34.873 03:18:48 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.132 * Looking for test storage... 00:07:35.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:35.132 03:18:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.132 03:18:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.132 03:18:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.132 03:18:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.132 03:18:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.132 03:18:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.132 03:18:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.132 03:18:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.132 03:18:48 version -- scripts/common.sh@340 -- # ver1_l=2 
00:07:35.132 03:18:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.132 03:18:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.132 03:18:48 version -- scripts/common.sh@344 -- # case "$op" in 00:07:35.132 03:18:48 version -- scripts/common.sh@345 -- # : 1 00:07:35.132 03:18:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.132 03:18:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.132 03:18:48 version -- scripts/common.sh@365 -- # decimal 1 00:07:35.132 03:18:48 version -- scripts/common.sh@353 -- # local d=1 00:07:35.132 03:18:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.132 03:18:48 version -- scripts/common.sh@355 -- # echo 1 00:07:35.132 03:18:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.132 03:18:48 version -- scripts/common.sh@366 -- # decimal 2 00:07:35.132 03:18:48 version -- scripts/common.sh@353 -- # local d=2 00:07:35.132 03:18:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.132 03:18:48 version -- scripts/common.sh@355 -- # echo 2 00:07:35.132 03:18:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.132 03:18:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.132 03:18:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.132 03:18:48 version -- scripts/common.sh@368 -- # return 0 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.132 --rc genhtml_branch_coverage=1 00:07:35.132 --rc genhtml_function_coverage=1 00:07:35.132 --rc genhtml_legend=1 00:07:35.132 --rc geninfo_all_blocks=1 00:07:35.132 --rc geninfo_unexecuted_blocks=1 00:07:35.132 00:07:35.132 ' 00:07:35.132 03:18:48 version -- 
common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.132 --rc genhtml_branch_coverage=1 00:07:35.132 --rc genhtml_function_coverage=1 00:07:35.132 --rc genhtml_legend=1 00:07:35.132 --rc geninfo_all_blocks=1 00:07:35.132 --rc geninfo_unexecuted_blocks=1 00:07:35.132 00:07:35.132 ' 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.132 --rc genhtml_branch_coverage=1 00:07:35.132 --rc genhtml_function_coverage=1 00:07:35.132 --rc genhtml_legend=1 00:07:35.132 --rc geninfo_all_blocks=1 00:07:35.132 --rc geninfo_unexecuted_blocks=1 00:07:35.132 00:07:35.132 ' 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.132 --rc genhtml_branch_coverage=1 00:07:35.132 --rc genhtml_function_coverage=1 00:07:35.132 --rc genhtml_legend=1 00:07:35.132 --rc geninfo_all_blocks=1 00:07:35.132 --rc geninfo_unexecuted_blocks=1 00:07:35.132 00:07:35.132 ' 00:07:35.132 03:18:48 version -- app/version.sh@17 -- # get_header_version major 00:07:35.132 03:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # cut -f2 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.132 03:18:48 version -- app/version.sh@17 -- # major=25 00:07:35.132 03:18:48 version -- app/version.sh@18 -- # get_header_version minor 00:07:35.132 03:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # cut -f2 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.132 03:18:48 version -- app/version.sh@18 -- 
# minor=1 00:07:35.132 03:18:48 version -- app/version.sh@19 -- # get_header_version patch 00:07:35.132 03:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # cut -f2 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.132 03:18:48 version -- app/version.sh@19 -- # patch=0 00:07:35.132 03:18:48 version -- app/version.sh@20 -- # get_header_version suffix 00:07:35.132 03:18:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # cut -f2 00:07:35.132 03:18:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.132 03:18:48 version -- app/version.sh@20 -- # suffix=-pre 00:07:35.132 03:18:48 version -- app/version.sh@22 -- # version=25.1 00:07:35.132 03:18:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.132 03:18:48 version -- app/version.sh@28 -- # version=25.1rc0 00:07:35.132 03:18:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.132 03:18:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.132 03:18:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:35.132 03:18:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:35.132 00:07:35.132 real 0m0.254s 00:07:35.132 user 0m0.170s 00:07:35.132 sys 0m0.118s 00:07:35.132 ************************************ 00:07:35.132 END TEST version 00:07:35.132 ************************************ 00:07:35.132 03:18:48 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.132 03:18:48 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.132 03:18:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.132 03:18:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:35.132 03:18:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:35.132 03:18:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.132 03:18:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.132 03:18:48 -- common/autotest_common.sh@10 -- # set +x 00:07:35.132 ************************************ 00:07:35.132 START TEST bdev_raid 00:07:35.132 ************************************ 00:07:35.132 03:18:48 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:35.390 * Looking for test storage... 00:07:35.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:35.390 03:18:48 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:35.390 03:18:48 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:07:35.390 03:18:48 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:35.390 03:18:48 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.390 03:18:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.391 
03:18:48 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.391 03:18:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.391 --rc genhtml_branch_coverage=1 00:07:35.391 --rc genhtml_function_coverage=1 00:07:35.391 --rc genhtml_legend=1 00:07:35.391 --rc geninfo_all_blocks=1 00:07:35.391 --rc geninfo_unexecuted_blocks=1 00:07:35.391 00:07:35.391 ' 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:07:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.391 --rc genhtml_branch_coverage=1 00:07:35.391 --rc genhtml_function_coverage=1 00:07:35.391 --rc genhtml_legend=1 00:07:35.391 --rc geninfo_all_blocks=1 00:07:35.391 --rc geninfo_unexecuted_blocks=1 00:07:35.391 00:07:35.391 ' 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.391 --rc genhtml_branch_coverage=1 00:07:35.391 --rc genhtml_function_coverage=1 00:07:35.391 --rc genhtml_legend=1 00:07:35.391 --rc geninfo_all_blocks=1 00:07:35.391 --rc geninfo_unexecuted_blocks=1 00:07:35.391 00:07:35.391 ' 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.391 --rc genhtml_branch_coverage=1 00:07:35.391 --rc genhtml_function_coverage=1 00:07:35.391 --rc genhtml_legend=1 00:07:35.391 --rc geninfo_all_blocks=1 00:07:35.391 --rc geninfo_unexecuted_blocks=1 00:07:35.391 00:07:35.391 ' 00:07:35.391 03:18:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:35.391 03:18:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:35.391 03:18:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:35.391 03:18:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:35.391 03:18:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:35.391 03:18:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:35.391 03:18:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.391 03:18:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:07:35.391 ************************************ 00:07:35.391 START TEST raid1_resize_data_offset_test 00:07:35.391 ************************************ 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59766 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59766' 00:07:35.391 Process raid pid: 59766 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59766 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59766 ']' 00:07:35.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.391 03:18:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.649 [2024-11-05 03:18:49.072691] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:35.649 [2024-11-05 03:18:49.072891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.649 [2024-11-05 03:18:49.256760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.909 [2024-11-05 03:18:49.362571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.191 [2024-11-05 03:18:49.550150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.191 [2024-11-05 03:18:49.550191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.449 03:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.449 03:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:07:36.449 03:18:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:36.449 03:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.449 03:18:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.449 malloc0 00:07:36.449 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.449 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:36.449 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.449 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 malloc1 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.706 03:18:50 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 null0 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 [2024-11-05 03:18:50.164829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:36.706 [2024-11-05 03:18:50.167196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:36.706 [2024-11-05 03:18:50.167258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:36.706 [2024-11-05 03:18:50.167460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:36.706 [2024-11-05 03:18:50.167480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:36.706 [2024-11-05 03:18:50.167791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:36.706 [2024-11-05 03:18:50.167975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:36.706 [2024-11-05 03:18:50.167994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:36.706 [2024-11-05 03:18:50.168145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:36.706 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:36.707 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.707 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.707 [2024-11-05 03:18:50.224845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:36.707 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.707 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:36.707 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.707 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.273 malloc2 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.273 [2024-11-05 03:18:50.733182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:37.273 [2024-11-05 03:18:50.749631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.273 [2024-11-05 03:18:50.752083] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59766 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59766 ']' 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59766 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59766 00:07:37.273 killing process with pid 59766 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59766' 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59766 00:07:37.273 03:18:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59766 00:07:37.273 [2024-11-05 03:18:50.837941] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.273 [2024-11-05 03:18:50.839653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:37.273 [2024-11-05 03:18:50.839795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.273 [2024-11-05 03:18:50.839821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:37.273 [2024-11-05 03:18:50.869039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.273 [2024-11-05 03:18:50.869523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.273 [2024-11-05 03:18:50.869560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:39.177 [2024-11-05 03:18:52.328765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.745 03:18:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:39.745 00:07:39.745 real 0m4.289s 00:07:39.745 user 0m4.199s 00:07:39.745 sys 0m0.624s 00:07:39.745 03:18:53 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.745 03:18:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.745 ************************************ 00:07:39.745 END TEST raid1_resize_data_offset_test 00:07:39.745 ************************************ 00:07:39.745 03:18:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:39.745 03:18:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:39.745 03:18:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.745 03:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.745 ************************************ 00:07:39.745 START TEST raid0_resize_superblock_test 00:07:39.745 ************************************ 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59844 00:07:39.745 Process raid pid: 59844 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59844' 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59844 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59844 ']' 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.745 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.745 03:18:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.005 [2024-11-05 03:18:53.396986] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:40.005 [2024-11-05 03:18:53.397153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.005 [2024-11-05 03:18:53.571704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.264 [2024-11-05 03:18:53.682303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.264 [2024-11-05 03:18:53.861913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.264 [2024-11-05 03:18:53.861975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.832 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.832 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:40.832 03:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:40.832 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.832 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.401 
malloc0 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.401 [2024-11-05 03:18:54.860453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:41.401 [2024-11-05 03:18:54.860557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.401 [2024-11-05 03:18:54.860589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:41.401 [2024-11-05 03:18:54.860607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.401 [2024-11-05 03:18:54.863392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.401 [2024-11-05 03:18:54.863454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:41.401 pt0 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.401 49ff657b-f4b4-42f6-a6be-dd760f23e5a5 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:41.401 03:18:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.401 a616981f-ef37-4644-9d28-ccec07720b8b 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.401 03:18:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.401 b9ae9426-7466-4d60-bd09-6cae2cec0293 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.401 [2024-11-05 03:18:55.012547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a616981f-ef37-4644-9d28-ccec07720b8b is claimed 00:07:41.401 [2024-11-05 03:18:55.012685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b9ae9426-7466-4d60-bd09-6cae2cec0293 is claimed 00:07:41.401 [2024-11-05 03:18:55.012895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:41.401 [2024-11-05 03:18:55.012920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:41.401 [2024-11-05 03:18:55.013219] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.401 [2024-11-05 03:18:55.013552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:41.401 [2024-11-05 03:18:55.013580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:41.401 [2024-11-05 03:18:55.013776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.401 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:41.660 [2024-11-05 03:18:55.128840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 [2024-11-05 03:18:55.180790] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:41.660 [2024-11-05 03:18:55.180834] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a616981f-ef37-4644-9d28-ccec07720b8b' was resized: old size 131072, new size 204800 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 [2024-11-05 03:18:55.188719] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:41.660 [2024-11-05 03:18:55.188748] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b9ae9426-7466-4d60-bd09-6cae2cec0293' was resized: old size 131072, new size 204800 00:07:41.660 [2024-11-05 03:18:55.188811] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.660 03:18:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.660 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.920 [2024-11-05 03:18:55.300911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.920 [2024-11-05 03:18:55.352616] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:41.920 [2024-11-05 03:18:55.352757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:41.920 [2024-11-05 03:18:55.352774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.920 [2024-11-05 03:18:55.352811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:41.920 [2024-11-05 03:18:55.352930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.920 [2024-11-05 03:18:55.353009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.920 [2024-11-05 03:18:55.353028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.920 [2024-11-05 03:18:55.364556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:41.920 [2024-11-05 03:18:55.364676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.920 [2024-11-05 03:18:55.364717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:41.920 [2024-11-05 03:18:55.364734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.920 [2024-11-05 03:18:55.367723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.920 [2024-11-05 03:18:55.367803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:41.920 pt0 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:41.920 [2024-11-05 03:18:55.370402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a616981f-ef37-4644-9d28-ccec07720b8b 00:07:41.920 [2024-11-05 03:18:55.370485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a616981f-ef37-4644-9d28-ccec07720b8b is claimed 00:07:41.920 [2024-11-05 03:18:55.370647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b9ae9426-7466-4d60-bd09-6cae2cec0293 00:07:41.920 [2024-11-05 03:18:55.370682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b9ae9426-7466-4d60-bd09-6cae2cec0293 is claimed 00:07:41.920 [2024-11-05 03:18:55.370860] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b9ae9426-7466-4d60-bd09-6cae2cec0293 (2) smaller than existing raid bdev Raid (3) 00:07:41.920 [2024-11-05 03:18:55.370907] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a616981f-ef37-4644-9d28-ccec07720b8b: File exists 00:07:41.920 [2024-11-05 03:18:55.370978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:41.920 [2024-11-05 03:18:55.370996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:41.920 [2024-11-05 03:18:55.371356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:41.920 [2024-11-05 03:18:55.371568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.920 [2024-11-05 03:18:55.371582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000007b00 00:07:41.920 [2024-11-05 03:18:55.371799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.920 [2024-11-05 03:18:55.388882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.920 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59844 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59844 ']' 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59844 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59844 00:07:41.921 killing process with pid 59844 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59844' 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59844 00:07:41.921 [2024-11-05 03:18:55.464625] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.921 03:18:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59844 00:07:41.921 [2024-11-05 03:18:55.464708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.921 [2024-11-05 03:18:55.464773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.921 [2024-11-05 03:18:55.464787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:43.298 [2024-11-05 03:18:56.621597] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.236 03:18:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:44.236 00:07:44.236 real 0m4.224s 00:07:44.236 user 0m4.485s 00:07:44.236 sys 0m0.637s 00:07:44.236 03:18:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.236 03:18:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.236 
************************************ 00:07:44.236 END TEST raid0_resize_superblock_test 00:07:44.236 ************************************ 00:07:44.236 03:18:57 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:44.236 03:18:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:44.236 03:18:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.236 03:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.236 ************************************ 00:07:44.236 START TEST raid1_resize_superblock_test 00:07:44.236 ************************************ 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59942 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.236 Process raid pid: 59942 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59942' 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59942 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59942 ']' 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:44.236 03:18:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.236 [2024-11-05 03:18:57.698248] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:44.236 [2024-11-05 03:18:57.698496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.495 [2024-11-05 03:18:57.877071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.495 [2024-11-05 03:18:57.981041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.754 [2024-11-05 03:18:58.184824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.754 [2024-11-05 03:18:58.184864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.013 03:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:45.013 03:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:45.013 03:18:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:45.013 03:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.013 03:18:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.580 malloc0 00:07:45.580 
03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.580 [2024-11-05 03:18:59.141867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:45.580 [2024-11-05 03:18:59.142007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.580 [2024-11-05 03:18:59.142154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:45.580 [2024-11-05 03:18:59.142297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.580 [2024-11-05 03:18:59.145583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.580 pt0 00:07:45.580 [2024-11-05 03:18:59.145831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.580 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 1839e579-ffea-48bc-b2c5-4a053a5f4936 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:45.840 03:18:59 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 eba6eed6-f73a-4576-9d8f-3efd2860cf8a 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 5c7ea9b1-500d-4888-9f23-0733e6bf62f1 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 [2024-11-05 03:18:59.285957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eba6eed6-f73a-4576-9d8f-3efd2860cf8a is claimed 00:07:45.840 [2024-11-05 03:18:59.286092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5c7ea9b1-500d-4888-9f23-0733e6bf62f1 is claimed 00:07:45.840 [2024-11-05 03:18:59.286270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:45.840 [2024-11-05 03:18:59.286330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:45.840 [2024-11-05 03:18:59.286738] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.840 [2024-11-05 03:18:59.287074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:45.840 [2024-11-05 03:18:59.287091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:45.840 [2024-11-05 03:18:59.287333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:45.840 [2024-11-05 03:18:59.410253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:45.840 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.841 [2024-11-05 03:18:59.462174] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:45.841 [2024-11-05 03:18:59.462435] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'eba6eed6-f73a-4576-9d8f-3efd2860cf8a' was resized: old size 131072, new size 204800 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.841 03:18:59 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.841 [2024-11-05 03:18:59.470142] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:45.841 [2024-11-05 03:18:59.470288] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5c7ea9b1-500d-4888-9f23-0733e6bf62f1' was resized: old size 131072, new size 204800 00:07:45.841 [2024-11-05 03:18:59.470480] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.841 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.100 03:18:59 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 [2024-11-05 03:18:59.586294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 [2024-11-05 03:18:59.634036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:46.100 [2024-11-05 03:18:59.634277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:46.100 [2024-11-05 03:18:59.634376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:46.100 [2024-11-05 03:18:59.634665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.100 [2024-11-05 03:18:59.634914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.100 [2024-11-05 03:18:59.635017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.100 [2024-11-05 03:18:59.635038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 [2024-11-05 03:18:59.641952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:46.100 [2024-11-05 03:18:59.642160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.100 [2024-11-05 03:18:59.642198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:46.100 [2024-11-05 03:18:59.642221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.100 [2024-11-05 03:18:59.645290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.100 [2024-11-05 03:18:59.645517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:46.100 pt0 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 [2024-11-05 03:18:59.648089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev eba6eed6-f73a-4576-9d8f-3efd2860cf8a 00:07:46.100 [2024-11-05 03:18:59.648186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eba6eed6-f73a-4576-9d8f-3efd2860cf8a is claimed 00:07:46.100 [2024-11-05 03:18:59.648366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5c7ea9b1-500d-4888-9f23-0733e6bf62f1 00:07:46.100 [2024-11-05 03:18:59.648405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5c7ea9b1-500d-4888-9f23-0733e6bf62f1 is claimed 00:07:46.100 [2024-11-05 03:18:59.648588] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5c7ea9b1-500d-4888-9f23-0733e6bf62f1 (2) smaller than existing raid bdev Raid (3) 00:07:46.100 [2024-11-05 03:18:59.648619] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev eba6eed6-f73a-4576-9d8f-3efd2860cf8a: File exists 00:07:46.100 [2024-11-05 03:18:59.648721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:46.100 [2024-11-05 03:18:59.648753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:46.100 [2024-11-05 03:18:59.649045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:46.100 [2024-11-05 03:18:59.649276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:46.100 [2024-11-05 
03:18:59.649297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:46.100 [2024-11-05 03:18:59.649511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 [2024-11-05 03:18:59.662409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59942 00:07:46.100 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59942 ']' 00:07:46.101 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59942 00:07:46.101 03:18:59 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:07:46.101 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.101 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59942 00:07:46.360 killing process with pid 59942 00:07:46.360 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.360 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.360 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59942' 00:07:46.360 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59942 00:07:46.360 [2024-11-05 03:18:59.739037] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.360 03:18:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59942 00:07:46.360 [2024-11-05 03:18:59.739099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.360 [2024-11-05 03:18:59.739153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.360 [2024-11-05 03:18:59.739166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:47.739 [2024-11-05 03:19:00.964760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.339 03:19:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:48.339 00:07:48.339 real 0m4.372s 00:07:48.339 user 0m4.615s 00:07:48.339 sys 0m0.652s 00:07:48.339 ************************************ 00:07:48.339 END TEST raid1_resize_superblock_test 00:07:48.339 ************************************ 00:07:48.339 03:19:01 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.339 03:19:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.598 03:19:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:48.598 03:19:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:48.598 03:19:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:48.598 03:19:02 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:48.598 03:19:02 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:48.598 03:19:02 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:48.598 03:19:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:48.598 03:19:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.598 03:19:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.598 ************************************ 00:07:48.598 START TEST raid_function_test_raid0 00:07:48.598 ************************************ 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:48.598 Process raid pid: 60045 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60045 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60045' 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60045 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60045 ']' 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:48.598 03:19:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:48.598 [2024-11-05 03:19:02.134244] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:48.598 [2024-11-05 03:19:02.134484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.856 [2024-11-05 03:19:02.321425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.856 [2024-11-05 03:19:02.439836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.115 [2024-11-05 03:19:02.629843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.115 [2024-11-05 03:19:02.629889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:49.683 Base_1 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:49.683 Base_2 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:49.683 [2024-11-05 03:19:03.239486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:49.683 [2024-11-05 03:19:03.241770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:49.683 [2024-11-05 03:19:03.241862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:49.683 [2024-11-05 03:19:03.241885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:49.683 [2024-11-05 03:19:03.242238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.683 [2024-11-05 03:19:03.242491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007780 00:07:49.683 [2024-11-05 03:19:03.242506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:49.683 [2024-11-05 03:19:03.242670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:49.683 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:49.943 [2024-11-05 03:19:03.547635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:49.943 /dev/nbd0 00:07:49.943 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.202 1+0 records in 00:07:50.202 1+0 records out 00:07:50.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371608 s, 11.0 MB/s 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:50.202 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:50.461 { 00:07:50.461 "nbd_device": "/dev/nbd0", 00:07:50.461 "bdev_name": "raid" 00:07:50.461 } 00:07:50.461 ]' 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:50.461 { 00:07:50.461 "nbd_device": "/dev/nbd0", 00:07:50.461 "bdev_name": "raid" 00:07:50.461 } 00:07:50.461 ]' 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:50.461 4096+0 records in 00:07:50.461 4096+0 records out 00:07:50.461 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0353564 s, 59.3 MB/s 00:07:50.461 03:19:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:50.720 4096+0 records in 00:07:50.720 4096+0 records out 00:07:50.720 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.313828 s, 6.7 MB/s 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:50.720 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:50.720 128+0 records in 00:07:50.720 128+0 records out 00:07:50.720 65536 bytes (66 kB, 64 KiB) copied, 0.00111251 s, 58.9 MB/s 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:50.721 2035+0 records in 00:07:50.721 2035+0 records out 00:07:50.721 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0124607 s, 83.6 MB/s 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:50.721 456+0 records in 00:07:50.721 456+0 records out 00:07:50.721 233472 bytes (233 kB, 228 KiB) copied, 0.00239484 s, 97.5 MB/s 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:50.721 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.980 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:51.239 [2024-11-05 03:19:04.636916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.239 03:19:04 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:51.239 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60045 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 
60045 ']' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60045 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.498 03:19:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60045 00:07:51.498 killing process with pid 60045 00:07:51.498 03:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.498 03:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.498 03:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60045' 00:07:51.498 03:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60045 00:07:51.498 [2024-11-05 03:19:05.011466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.498 03:19:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60045 00:07:51.498 [2024-11-05 03:19:05.011584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.498 [2024-11-05 03:19:05.011654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.498 [2024-11-05 03:19:05.011680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:51.758 [2024-11-05 03:19:05.209781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.695 03:19:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:52.695 00:07:52.695 real 0m4.142s 00:07:52.696 user 0m5.089s 00:07:52.696 sys 0m0.976s 00:07:52.696 03:19:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:52.696 03:19:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:52.696 ************************************ 00:07:52.696 END TEST raid_function_test_raid0 00:07:52.696 ************************************ 00:07:52.696 03:19:06 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:52.696 03:19:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:52.696 03:19:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.696 03:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.696 ************************************ 00:07:52.696 START TEST raid_function_test_concat 00:07:52.696 ************************************ 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60174 00:07:52.696 Process raid pid: 60174 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60174' 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60174 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60174 ']' 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.696 03:19:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:52.696 [2024-11-05 03:19:06.316264] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:52.696 [2024-11-05 03:19:06.316458] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.972 [2024-11-05 03:19:06.483110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.239 [2024-11-05 03:19:06.608594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.240 [2024-11-05 03:19:06.797065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.240 [2024-11-05 03:19:06.797129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.809 Base_1 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:53.809 Base_2 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:53.809 [2024-11-05 03:19:07.346674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:53.809 [2024-11-05 03:19:07.349052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:53.809 [2024-11-05 03:19:07.349157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:53.809 [2024-11-05 03:19:07.349176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:53.809 [2024-11-05 03:19:07.349572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:53.809 [2024-11-05 03:19:07.349795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:53.809 [2024-11-05 03:19:07.349825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:53.809 [2024-11-05 03:19:07.350028] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:53.809 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:54.068 [2024-11-05 03:19:07.678788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:54.068 /dev/nbd0 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.327 1+0 records in 00:07:54.327 1+0 records out 00:07:54.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234812 s, 17.4 MB/s 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:54.327 03:19:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.586 { 00:07:54.586 "nbd_device": "/dev/nbd0", 00:07:54.586 "bdev_name": "raid" 00:07:54.586 } 00:07:54.586 ]' 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.586 { 00:07:54.586 "nbd_device": "/dev/nbd0", 00:07:54.586 "bdev_name": "raid" 00:07:54.586 } 00:07:54.586 ]' 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:54.586 
03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:54.586 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 
-- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:54.587 4096+0 records in 00:07:54.587 4096+0 records out 00:07:54.587 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.030844 s, 68.0 MB/s 00:07:54.587 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:54.846 4096+0 records in 00:07:54.846 4096+0 records out 00:07:54.846 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.306933 s, 6.8 MB/s 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:54.846 128+0 records in 00:07:54.846 128+0 records out 00:07:54.846 65536 bytes (66 kB, 64 KiB) copied, 0.00110933 s, 59.1 MB/s 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:54.846 2035+0 records in 00:07:54.846 2035+0 records out 00:07:54.846 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0119616 s, 87.1 MB/s 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:54.846 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:55.105 456+0 records in 00:07:55.105 456+0 records out 00:07:55.105 233472 bytes (233 kB, 228 KiB) copied, 0.00199385 s, 117 MB/s 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.105 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:55.364 [2024-11-05 03:19:08.781002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:55.364 03:19:08 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.364 03:19:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60174 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60174 ']' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- 
# kill -0 60174 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60174 00:07:55.623 killing process with pid 60174 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60174' 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60174 00:07:55.623 [2024-11-05 03:19:09.133709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.623 03:19:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60174 00:07:55.623 [2024-11-05 03:19:09.133817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.623 [2024-11-05 03:19:09.133879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.623 [2024-11-05 03:19:09.133896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:55.882 [2024-11-05 03:19:09.300778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.839 03:19:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:56.839 00:07:56.839 real 0m4.010s 00:07:56.839 user 0m4.861s 00:07:56.839 sys 0m0.995s 00:07:56.839 03:19:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.839 03:19:10 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.839 ************************************ 00:07:56.839 END TEST raid_function_test_concat 00:07:56.839 ************************************ 00:07:56.839 03:19:10 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:56.839 03:19:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:56.839 03:19:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.839 03:19:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.839 ************************************ 00:07:56.839 START TEST raid0_resize_test 00:07:56.839 ************************************ 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60303 00:07:56.839 Process raid pid: 60303 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60303' 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60303 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60303 ']' 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:56.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:56.839 03:19:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.839 [2024-11-05 03:19:10.374400] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:56.839 [2024-11-05 03:19:10.374557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.098 [2024-11-05 03:19:10.546402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.098 [2024-11-05 03:19:10.661200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.356 [2024-11-05 03:19:10.868897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.356 [2024-11-05 03:19:10.868950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 Base_1 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 Base_2 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 [2024-11-05 03:19:11.420586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:57.925 [2024-11-05 03:19:11.423235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:57.925 [2024-11-05 03:19:11.423349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:57.925 [2024-11-05 03:19:11.423368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:57.925 [2024-11-05 03:19:11.423716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:57.925 [2024-11-05 03:19:11.423885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:57.925 [2024-11-05 03:19:11.423901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:57.925 [2024-11-05 03:19:11.424066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 [2024-11-05 03:19:11.428517] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:57.925 [2024-11-05 03:19:11.428570] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:57.925 true 
00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 [2024-11-05 03:19:11.440752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 [2024-11-05 03:19:11.492519] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:57.925 [2024-11-05 03:19:11.492551] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:57.925 [2024-11-05 03:19:11.492586] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:57.925 true 
00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.925 [2024-11-05 03:19:11.504758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60303 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60303 ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60303 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.925 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60303 00:07:58.185 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:58.185 killing process with pid 60303 
00:07:58.185 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:58.185 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60303' 00:07:58.185 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60303 00:07:58.185 [2024-11-05 03:19:11.574492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.185 03:19:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60303 00:07:58.185 [2024-11-05 03:19:11.574745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.185 [2024-11-05 03:19:11.574817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.185 [2024-11-05 03:19:11.574834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:58.185 [2024-11-05 03:19:11.591377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.121 ************************************ 00:07:59.121 END TEST raid0_resize_test 00:07:59.121 ************************************ 00:07:59.121 03:19:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:59.121 00:07:59.121 real 0m2.288s 00:07:59.121 user 0m2.549s 00:07:59.121 sys 0m0.386s 00:07:59.121 03:19:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.121 03:19:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.121 03:19:12 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:59.121 03:19:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:59.121 03:19:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.121 03:19:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.121 ************************************ 
00:07:59.121 START TEST raid1_resize_test 00:07:59.121 ************************************ 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:59.121 Process raid pid: 60359 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60359 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60359' 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60359 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60359 ']' 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:59.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.121 03:19:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.121 [2024-11-05 03:19:12.715748] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:07:59.121 [2024-11-05 03:19:12.716132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.380 [2024-11-05 03:19:12.890274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.380 [2024-11-05 03:19:13.013473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.639 [2024-11-05 03:19:13.207567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.639 [2024-11-05 03:19:13.207853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.207 Base_1 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.207 Base_2 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.207 [2024-11-05 03:19:13.668639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:00.207 [2024-11-05 03:19:13.670989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:00.207 [2024-11-05 03:19:13.671061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:00.207 [2024-11-05 03:19:13.671080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:00.207 [2024-11-05 03:19:13.671393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:00.207 [2024-11-05 03:19:13.671575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:00.207 [2024-11-05 03:19:13.671592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:00.207 [2024-11-05 03:19:13.671829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.207 [2024-11-05 03:19:13.676636] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:00.207 [2024-11-05 03:19:13.676816] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:00.207 true 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.207 [2024-11-05 03:19:13.688843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.207 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.208 [2024-11-05 
03:19:13.736669] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:00.208 [2024-11-05 03:19:13.736695] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:00.208 [2024-11-05 03:19:13.736747] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:00.208 true 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.208 [2024-11-05 03:19:13.748892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60359 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60359 ']' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60359 00:08:00.208 03:19:13 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60359 00:08:00.208 killing process with pid 60359 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60359' 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60359 00:08:00.208 [2024-11-05 03:19:13.828916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.208 03:19:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60359 00:08:00.208 [2024-11-05 03:19:13.828994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.208 [2024-11-05 03:19:13.829588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.208 [2024-11-05 03:19:13.829618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:00.467 [2024-11-05 03:19:13.845692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.404 03:19:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:01.404 00:08:01.404 real 0m2.243s 00:08:01.404 user 0m2.450s 00:08:01.404 sys 0m0.357s 00:08:01.404 ************************************ 00:08:01.404 END TEST raid1_resize_test 00:08:01.404 ************************************ 00:08:01.404 03:19:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.404 03:19:14 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.404 03:19:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:01.404 03:19:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:01.404 03:19:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:01.404 03:19:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:01.404 03:19:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.404 03:19:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.404 ************************************ 00:08:01.404 START TEST raid_state_function_test 00:08:01.404 ************************************ 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:01.404 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:01.405 Process raid pid: 60416 00:08:01.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60416 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60416' 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60416 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60416 ']' 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:01.405 03:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.405 [2024-11-05 03:19:15.038173] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:08:01.405 [2024-11-05 03:19:15.039002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.664 [2024-11-05 03:19:15.224833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.923 [2024-11-05 03:19:15.343499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.923 [2024-11-05 03:19:15.548886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.923 [2024-11-05 03:19:15.549104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.491 [2024-11-05 03:19:16.008112] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.491 [2024-11-05 03:19:16.008409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.491 [2024-11-05 03:19:16.008439] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.491 [2024-11-05 03:19:16.008459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.491 03:19:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.491 "name": "Existed_Raid", 00:08:02.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.491 "strip_size_kb": 64, 00:08:02.491 "state": "configuring", 00:08:02.491 
"raid_level": "raid0", 00:08:02.491 "superblock": false, 00:08:02.491 "num_base_bdevs": 2, 00:08:02.491 "num_base_bdevs_discovered": 0, 00:08:02.491 "num_base_bdevs_operational": 2, 00:08:02.491 "base_bdevs_list": [ 00:08:02.491 { 00:08:02.491 "name": "BaseBdev1", 00:08:02.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.491 "is_configured": false, 00:08:02.491 "data_offset": 0, 00:08:02.491 "data_size": 0 00:08:02.491 }, 00:08:02.491 { 00:08:02.491 "name": "BaseBdev2", 00:08:02.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.491 "is_configured": false, 00:08:02.491 "data_offset": 0, 00:08:02.491 "data_size": 0 00:08:02.491 } 00:08:02.491 ] 00:08:02.491 }' 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.491 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.059 [2024-11-05 03:19:16.504179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.059 [2024-11-05 03:19:16.504391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.059 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:03.059 [2024-11-05 03:19:16.512175] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.059 [2024-11-05 03:19:16.512361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.059 [2024-11-05 03:19:16.512515] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.060 [2024-11-05 03:19:16.512581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.060 BaseBdev1 00:08:03.060 [2024-11-05 03:19:16.558320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.060 [ 00:08:03.060 { 00:08:03.060 "name": "BaseBdev1", 00:08:03.060 "aliases": [ 00:08:03.060 "f4209b37-047a-47ca-950a-0614627eda81" 00:08:03.060 ], 00:08:03.060 "product_name": "Malloc disk", 00:08:03.060 "block_size": 512, 00:08:03.060 "num_blocks": 65536, 00:08:03.060 "uuid": "f4209b37-047a-47ca-950a-0614627eda81", 00:08:03.060 "assigned_rate_limits": { 00:08:03.060 "rw_ios_per_sec": 0, 00:08:03.060 "rw_mbytes_per_sec": 0, 00:08:03.060 "r_mbytes_per_sec": 0, 00:08:03.060 "w_mbytes_per_sec": 0 00:08:03.060 }, 00:08:03.060 "claimed": true, 00:08:03.060 "claim_type": "exclusive_write", 00:08:03.060 "zoned": false, 00:08:03.060 "supported_io_types": { 00:08:03.060 "read": true, 00:08:03.060 "write": true, 00:08:03.060 "unmap": true, 00:08:03.060 "flush": true, 00:08:03.060 "reset": true, 00:08:03.060 "nvme_admin": false, 00:08:03.060 "nvme_io": false, 00:08:03.060 "nvme_io_md": false, 00:08:03.060 "write_zeroes": true, 00:08:03.060 "zcopy": true, 00:08:03.060 "get_zone_info": false, 00:08:03.060 "zone_management": false, 00:08:03.060 "zone_append": false, 00:08:03.060 "compare": false, 00:08:03.060 "compare_and_write": false, 00:08:03.060 "abort": true, 00:08:03.060 "seek_hole": false, 00:08:03.060 "seek_data": false, 00:08:03.060 "copy": true, 00:08:03.060 "nvme_iov_md": 
false 00:08:03.060 }, 00:08:03.060 "memory_domains": [ 00:08:03.060 { 00:08:03.060 "dma_device_id": "system", 00:08:03.060 "dma_device_type": 1 00:08:03.060 }, 00:08:03.060 { 00:08:03.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.060 "dma_device_type": 2 00:08:03.060 } 00:08:03.060 ], 00:08:03.060 "driver_specific": {} 00:08:03.060 } 00:08:03.060 ] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.060 
03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.060 "name": "Existed_Raid", 00:08:03.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.060 "strip_size_kb": 64, 00:08:03.060 "state": "configuring", 00:08:03.060 "raid_level": "raid0", 00:08:03.060 "superblock": false, 00:08:03.060 "num_base_bdevs": 2, 00:08:03.060 "num_base_bdevs_discovered": 1, 00:08:03.060 "num_base_bdevs_operational": 2, 00:08:03.060 "base_bdevs_list": [ 00:08:03.060 { 00:08:03.060 "name": "BaseBdev1", 00:08:03.060 "uuid": "f4209b37-047a-47ca-950a-0614627eda81", 00:08:03.060 "is_configured": true, 00:08:03.060 "data_offset": 0, 00:08:03.060 "data_size": 65536 00:08:03.060 }, 00:08:03.060 { 00:08:03.060 "name": "BaseBdev2", 00:08:03.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.060 "is_configured": false, 00:08:03.060 "data_offset": 0, 00:08:03.060 "data_size": 0 00:08:03.060 } 00:08:03.060 ] 00:08:03.060 }' 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.060 03:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.636 [2024-11-05 03:19:17.106561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.636 [2024-11-05 03:19:17.106753] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.636 [2024-11-05 03:19:17.114609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.636 [2024-11-05 03:19:17.117179] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.636 [2024-11-05 03:19:17.117371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.636 "name": "Existed_Raid", 00:08:03.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.636 "strip_size_kb": 64, 00:08:03.636 "state": "configuring", 00:08:03.636 "raid_level": "raid0", 00:08:03.636 "superblock": false, 00:08:03.636 "num_base_bdevs": 2, 00:08:03.636 "num_base_bdevs_discovered": 1, 00:08:03.636 "num_base_bdevs_operational": 2, 00:08:03.636 "base_bdevs_list": [ 00:08:03.636 { 00:08:03.636 "name": "BaseBdev1", 00:08:03.636 "uuid": "f4209b37-047a-47ca-950a-0614627eda81", 00:08:03.636 "is_configured": true, 00:08:03.636 "data_offset": 0, 00:08:03.636 "data_size": 65536 00:08:03.636 }, 00:08:03.636 { 00:08:03.636 "name": "BaseBdev2", 00:08:03.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.636 "is_configured": false, 00:08:03.636 "data_offset": 0, 00:08:03.636 "data_size": 0 00:08:03.636 } 00:08:03.636 
] 00:08:03.636 }' 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.636 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.218 [2024-11-05 03:19:17.675034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.218 [2024-11-05 03:19:17.675283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.218 [2024-11-05 03:19:17.675308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:04.218 [2024-11-05 03:19:17.675713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:04.218 [2024-11-05 03:19:17.675945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.218 [2024-11-05 03:19:17.675967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:04.218 [2024-11-05 03:19:17.676264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.218 BaseBdev2 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:04.218 03:19:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.218 [ 00:08:04.218 { 00:08:04.218 "name": "BaseBdev2", 00:08:04.218 "aliases": [ 00:08:04.218 "dc382a5d-87f0-4305-9132-87c45db69c8f" 00:08:04.218 ], 00:08:04.218 "product_name": "Malloc disk", 00:08:04.218 "block_size": 512, 00:08:04.218 "num_blocks": 65536, 00:08:04.218 "uuid": "dc382a5d-87f0-4305-9132-87c45db69c8f", 00:08:04.218 "assigned_rate_limits": { 00:08:04.218 "rw_ios_per_sec": 0, 00:08:04.218 "rw_mbytes_per_sec": 0, 00:08:04.218 "r_mbytes_per_sec": 0, 00:08:04.218 "w_mbytes_per_sec": 0 00:08:04.218 }, 00:08:04.218 "claimed": true, 00:08:04.218 "claim_type": "exclusive_write", 00:08:04.218 "zoned": false, 00:08:04.218 "supported_io_types": { 00:08:04.218 "read": true, 00:08:04.218 "write": true, 00:08:04.218 "unmap": true, 00:08:04.218 "flush": true, 00:08:04.218 "reset": true, 00:08:04.218 "nvme_admin": false, 00:08:04.218 "nvme_io": false, 00:08:04.218 "nvme_io_md": 
false, 00:08:04.218 "write_zeroes": true, 00:08:04.218 "zcopy": true, 00:08:04.218 "get_zone_info": false, 00:08:04.218 "zone_management": false, 00:08:04.218 "zone_append": false, 00:08:04.218 "compare": false, 00:08:04.218 "compare_and_write": false, 00:08:04.218 "abort": true, 00:08:04.218 "seek_hole": false, 00:08:04.218 "seek_data": false, 00:08:04.218 "copy": true, 00:08:04.218 "nvme_iov_md": false 00:08:04.218 }, 00:08:04.218 "memory_domains": [ 00:08:04.218 { 00:08:04.218 "dma_device_id": "system", 00:08:04.218 "dma_device_type": 1 00:08:04.218 }, 00:08:04.218 { 00:08:04.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.218 "dma_device_type": 2 00:08:04.218 } 00:08:04.218 ], 00:08:04.218 "driver_specific": {} 00:08:04.218 } 00:08:04.218 ] 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.218 "name": "Existed_Raid", 00:08:04.218 "uuid": "cadbc0c8-5699-47dd-b9ad-20ea225502c8", 00:08:04.218 "strip_size_kb": 64, 00:08:04.218 "state": "online", 00:08:04.218 "raid_level": "raid0", 00:08:04.218 "superblock": false, 00:08:04.218 "num_base_bdevs": 2, 00:08:04.218 "num_base_bdevs_discovered": 2, 00:08:04.218 "num_base_bdevs_operational": 2, 00:08:04.218 "base_bdevs_list": [ 00:08:04.218 { 00:08:04.218 "name": "BaseBdev1", 00:08:04.218 "uuid": "f4209b37-047a-47ca-950a-0614627eda81", 00:08:04.218 "is_configured": true, 00:08:04.218 "data_offset": 0, 00:08:04.218 "data_size": 65536 00:08:04.218 }, 00:08:04.218 { 00:08:04.218 "name": "BaseBdev2", 00:08:04.218 "uuid": "dc382a5d-87f0-4305-9132-87c45db69c8f", 00:08:04.218 "is_configured": true, 00:08:04.218 "data_offset": 0, 00:08:04.218 "data_size": 65536 00:08:04.218 } 00:08:04.218 ] 00:08:04.218 }' 00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:04.218 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.785 [2024-11-05 03:19:18.223634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.785 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.785 "name": "Existed_Raid", 00:08:04.785 "aliases": [ 00:08:04.785 "cadbc0c8-5699-47dd-b9ad-20ea225502c8" 00:08:04.785 ], 00:08:04.785 "product_name": "Raid Volume", 00:08:04.785 "block_size": 512, 00:08:04.785 "num_blocks": 131072, 00:08:04.785 "uuid": "cadbc0c8-5699-47dd-b9ad-20ea225502c8", 00:08:04.785 "assigned_rate_limits": { 00:08:04.785 "rw_ios_per_sec": 0, 00:08:04.785 "rw_mbytes_per_sec": 0, 00:08:04.785 "r_mbytes_per_sec": 
0, 00:08:04.785 "w_mbytes_per_sec": 0 00:08:04.785 }, 00:08:04.785 "claimed": false, 00:08:04.786 "zoned": false, 00:08:04.786 "supported_io_types": { 00:08:04.786 "read": true, 00:08:04.786 "write": true, 00:08:04.786 "unmap": true, 00:08:04.786 "flush": true, 00:08:04.786 "reset": true, 00:08:04.786 "nvme_admin": false, 00:08:04.786 "nvme_io": false, 00:08:04.786 "nvme_io_md": false, 00:08:04.786 "write_zeroes": true, 00:08:04.786 "zcopy": false, 00:08:04.786 "get_zone_info": false, 00:08:04.786 "zone_management": false, 00:08:04.786 "zone_append": false, 00:08:04.786 "compare": false, 00:08:04.786 "compare_and_write": false, 00:08:04.786 "abort": false, 00:08:04.786 "seek_hole": false, 00:08:04.786 "seek_data": false, 00:08:04.786 "copy": false, 00:08:04.786 "nvme_iov_md": false 00:08:04.786 }, 00:08:04.786 "memory_domains": [ 00:08:04.786 { 00:08:04.786 "dma_device_id": "system", 00:08:04.786 "dma_device_type": 1 00:08:04.786 }, 00:08:04.786 { 00:08:04.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.786 "dma_device_type": 2 00:08:04.786 }, 00:08:04.786 { 00:08:04.786 "dma_device_id": "system", 00:08:04.786 "dma_device_type": 1 00:08:04.786 }, 00:08:04.786 { 00:08:04.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.786 "dma_device_type": 2 00:08:04.786 } 00:08:04.786 ], 00:08:04.786 "driver_specific": { 00:08:04.786 "raid": { 00:08:04.786 "uuid": "cadbc0c8-5699-47dd-b9ad-20ea225502c8", 00:08:04.786 "strip_size_kb": 64, 00:08:04.786 "state": "online", 00:08:04.786 "raid_level": "raid0", 00:08:04.786 "superblock": false, 00:08:04.786 "num_base_bdevs": 2, 00:08:04.786 "num_base_bdevs_discovered": 2, 00:08:04.786 "num_base_bdevs_operational": 2, 00:08:04.786 "base_bdevs_list": [ 00:08:04.786 { 00:08:04.786 "name": "BaseBdev1", 00:08:04.786 "uuid": "f4209b37-047a-47ca-950a-0614627eda81", 00:08:04.786 "is_configured": true, 00:08:04.786 "data_offset": 0, 00:08:04.786 "data_size": 65536 00:08:04.786 }, 00:08:04.786 { 00:08:04.786 "name": "BaseBdev2", 
00:08:04.786 "uuid": "dc382a5d-87f0-4305-9132-87c45db69c8f", 00:08:04.786 "is_configured": true, 00:08:04.786 "data_offset": 0, 00:08:04.786 "data_size": 65536 00:08:04.786 } 00:08:04.786 ] 00:08:04.786 } 00:08:04.786 } 00:08:04.786 }' 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:04.786 BaseBdev2' 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.786 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.045 [2024-11-05 03:19:18.479440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.045 [2024-11-05 03:19:18.479678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.045 [2024-11-05 03:19:18.479898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.045 "name": "Existed_Raid", 00:08:05.045 "uuid": "cadbc0c8-5699-47dd-b9ad-20ea225502c8", 00:08:05.045 "strip_size_kb": 64, 00:08:05.045 
"state": "offline", 00:08:05.045 "raid_level": "raid0", 00:08:05.045 "superblock": false, 00:08:05.045 "num_base_bdevs": 2, 00:08:05.045 "num_base_bdevs_discovered": 1, 00:08:05.045 "num_base_bdevs_operational": 1, 00:08:05.045 "base_bdevs_list": [ 00:08:05.045 { 00:08:05.045 "name": null, 00:08:05.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.045 "is_configured": false, 00:08:05.045 "data_offset": 0, 00:08:05.045 "data_size": 65536 00:08:05.045 }, 00:08:05.045 { 00:08:05.045 "name": "BaseBdev2", 00:08:05.045 "uuid": "dc382a5d-87f0-4305-9132-87c45db69c8f", 00:08:05.045 "is_configured": true, 00:08:05.045 "data_offset": 0, 00:08:05.045 "data_size": 65536 00:08:05.045 } 00:08:05.045 ] 00:08:05.045 }' 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.045 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.613 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:05.613 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.614 [2024-11-05 03:19:19.128571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:05.614 [2024-11-05 03:19:19.128866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:05.614 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60416 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60416 ']' 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60416 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60416 00:08:05.872 killing process with pid 60416 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60416' 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60416 00:08:05.872 [2024-11-05 03:19:19.300246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.872 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60416 00:08:05.872 [2024-11-05 03:19:19.315007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.807 ************************************ 00:08:06.807 END TEST raid_state_function_test 00:08:06.807 ************************************ 00:08:06.807 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.808 00:08:06.808 real 0m5.409s 00:08:06.808 user 0m8.212s 00:08:06.808 sys 0m0.741s 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.808 03:19:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:06.808 03:19:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:08:06.808 03:19:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.808 03:19:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.808 ************************************ 00:08:06.808 START TEST raid_state_function_test_sb 00:08:06.808 ************************************ 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.808 Process raid pid: 60675 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60675 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60675' 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60675 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60675 ']' 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.808 03:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.066 [2024-11-05 03:19:20.473175] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:07.066 [2024-11-05 03:19:20.473383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.067 [2024-11-05 03:19:20.646590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.325 [2024-11-05 03:19:20.772511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.584 [2024-11-05 03:19:20.984715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.584 [2024-11-05 03:19:20.984792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 [2024-11-05 03:19:21.497632] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:08.152 [2024-11-05 03:19:21.497859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.152 [2024-11-05 03:19:21.498029] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.152 [2024-11-05 03:19:21.498065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.152 
03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.152 "name": "Existed_Raid", 00:08:08.152 "uuid": "da511ebd-d7fc-466f-8581-0a265093a7fb", 00:08:08.152 "strip_size_kb": 64, 00:08:08.152 "state": "configuring", 00:08:08.152 "raid_level": "raid0", 00:08:08.152 "superblock": true, 00:08:08.152 "num_base_bdevs": 2, 00:08:08.152 "num_base_bdevs_discovered": 0, 00:08:08.152 "num_base_bdevs_operational": 2, 00:08:08.152 "base_bdevs_list": [ 00:08:08.152 { 00:08:08.152 "name": "BaseBdev1", 00:08:08.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.152 "is_configured": false, 00:08:08.152 "data_offset": 0, 00:08:08.152 "data_size": 0 00:08:08.152 }, 00:08:08.152 { 00:08:08.152 "name": "BaseBdev2", 00:08:08.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.152 "is_configured": false, 00:08:08.152 "data_offset": 0, 00:08:08.152 "data_size": 0 00:08:08.152 } 00:08:08.152 ] 00:08:08.152 }' 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.152 03:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.411 [2024-11-05 03:19:22.009757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:08.411 [2024-11-05 03:19:22.010000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.411 [2024-11-05 03:19:22.021776] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.411 [2024-11-05 03:19:22.021967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.411 [2024-11-05 03:19:22.022117] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.411 [2024-11-05 03:19:22.022290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.411 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.671 [2024-11-05 03:19:22.070832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.671 BaseBdev1 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.671 [ 00:08:08.671 { 00:08:08.671 "name": "BaseBdev1", 00:08:08.671 "aliases": [ 00:08:08.671 "3e61c63c-bf22-4109-abc0-970a397909eb" 00:08:08.671 ], 00:08:08.671 "product_name": "Malloc disk", 00:08:08.671 "block_size": 512, 00:08:08.671 "num_blocks": 65536, 00:08:08.671 "uuid": "3e61c63c-bf22-4109-abc0-970a397909eb", 00:08:08.671 "assigned_rate_limits": { 00:08:08.671 "rw_ios_per_sec": 0, 00:08:08.671 "rw_mbytes_per_sec": 0, 00:08:08.671 "r_mbytes_per_sec": 0, 00:08:08.671 "w_mbytes_per_sec": 0 00:08:08.671 }, 00:08:08.671 "claimed": true, 
00:08:08.671 "claim_type": "exclusive_write", 00:08:08.671 "zoned": false, 00:08:08.671 "supported_io_types": { 00:08:08.671 "read": true, 00:08:08.671 "write": true, 00:08:08.671 "unmap": true, 00:08:08.671 "flush": true, 00:08:08.671 "reset": true, 00:08:08.671 "nvme_admin": false, 00:08:08.671 "nvme_io": false, 00:08:08.671 "nvme_io_md": false, 00:08:08.671 "write_zeroes": true, 00:08:08.671 "zcopy": true, 00:08:08.671 "get_zone_info": false, 00:08:08.671 "zone_management": false, 00:08:08.671 "zone_append": false, 00:08:08.671 "compare": false, 00:08:08.671 "compare_and_write": false, 00:08:08.671 "abort": true, 00:08:08.671 "seek_hole": false, 00:08:08.671 "seek_data": false, 00:08:08.671 "copy": true, 00:08:08.671 "nvme_iov_md": false 00:08:08.671 }, 00:08:08.671 "memory_domains": [ 00:08:08.671 { 00:08:08.671 "dma_device_id": "system", 00:08:08.671 "dma_device_type": 1 00:08:08.671 }, 00:08:08.671 { 00:08:08.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.671 "dma_device_type": 2 00:08:08.671 } 00:08:08.671 ], 00:08:08.671 "driver_specific": {} 00:08:08.671 } 00:08:08.671 ] 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.671 03:19:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.671 "name": "Existed_Raid", 00:08:08.671 "uuid": "90a6ec47-990c-41f5-b6b2-5edd1f7e69fe", 00:08:08.671 "strip_size_kb": 64, 00:08:08.671 "state": "configuring", 00:08:08.671 "raid_level": "raid0", 00:08:08.671 "superblock": true, 00:08:08.671 "num_base_bdevs": 2, 00:08:08.671 "num_base_bdevs_discovered": 1, 00:08:08.671 "num_base_bdevs_operational": 2, 00:08:08.671 "base_bdevs_list": [ 00:08:08.671 { 00:08:08.671 "name": "BaseBdev1", 00:08:08.671 "uuid": "3e61c63c-bf22-4109-abc0-970a397909eb", 00:08:08.671 "is_configured": true, 00:08:08.671 "data_offset": 2048, 00:08:08.671 "data_size": 63488 00:08:08.671 }, 00:08:08.671 { 00:08:08.671 "name": "BaseBdev2", 00:08:08.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.671 
"is_configured": false, 00:08:08.671 "data_offset": 0, 00:08:08.671 "data_size": 0 00:08:08.671 } 00:08:08.671 ] 00:08:08.671 }' 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.671 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.255 [2024-11-05 03:19:22.651094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.255 [2024-11-05 03:19:22.651184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.255 [2024-11-05 03:19:22.659180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.255 [2024-11-05 03:19:22.662155] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.255 [2024-11-05 03:19:22.662347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.255 03:19:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.255 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.256 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.256 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.256 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.256 03:19:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.256 "name": "Existed_Raid", 00:08:09.256 "uuid": "137d7d52-b566-4e8a-89eb-5326dec61dc9", 00:08:09.256 "strip_size_kb": 64, 00:08:09.256 "state": "configuring", 00:08:09.256 "raid_level": "raid0", 00:08:09.256 "superblock": true, 00:08:09.256 "num_base_bdevs": 2, 00:08:09.256 "num_base_bdevs_discovered": 1, 00:08:09.256 "num_base_bdevs_operational": 2, 00:08:09.256 "base_bdevs_list": [ 00:08:09.256 { 00:08:09.256 "name": "BaseBdev1", 00:08:09.256 "uuid": "3e61c63c-bf22-4109-abc0-970a397909eb", 00:08:09.256 "is_configured": true, 00:08:09.256 "data_offset": 2048, 00:08:09.256 "data_size": 63488 00:08:09.256 }, 00:08:09.256 { 00:08:09.256 "name": "BaseBdev2", 00:08:09.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.256 "is_configured": false, 00:08:09.256 "data_offset": 0, 00:08:09.256 "data_size": 0 00:08:09.256 } 00:08:09.256 ] 00:08:09.256 }' 00:08:09.256 03:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.256 03:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.838 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.839 BaseBdev2 00:08:09.839 [2024-11-05 03:19:23.248598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.839 [2024-11-05 03:19:23.248955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.839 [2024-11-05 03:19:23.248987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:09.839 [2024-11-05 03:19:23.249409] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.839 [2024-11-05 03:19:23.249609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.839 [2024-11-05 03:19:23.249630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.839 [2024-11-05 03:19:23.249832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.839 
03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.839 [ 00:08:09.839 { 00:08:09.839 "name": "BaseBdev2", 00:08:09.839 "aliases": [ 00:08:09.839 "a4ba6f77-d4c3-4cdd-b175-e89d68d889eb" 00:08:09.839 ], 00:08:09.839 "product_name": "Malloc disk", 00:08:09.839 "block_size": 512, 00:08:09.839 "num_blocks": 65536, 00:08:09.839 "uuid": "a4ba6f77-d4c3-4cdd-b175-e89d68d889eb", 00:08:09.839 "assigned_rate_limits": { 00:08:09.839 "rw_ios_per_sec": 0, 00:08:09.839 "rw_mbytes_per_sec": 0, 00:08:09.839 "r_mbytes_per_sec": 0, 00:08:09.839 "w_mbytes_per_sec": 0 00:08:09.839 }, 00:08:09.839 "claimed": true, 00:08:09.839 "claim_type": "exclusive_write", 00:08:09.839 "zoned": false, 00:08:09.839 "supported_io_types": { 00:08:09.839 "read": true, 00:08:09.839 "write": true, 00:08:09.839 "unmap": true, 00:08:09.839 "flush": true, 00:08:09.839 "reset": true, 00:08:09.839 "nvme_admin": false, 00:08:09.839 "nvme_io": false, 00:08:09.839 "nvme_io_md": false, 00:08:09.839 "write_zeroes": true, 00:08:09.839 "zcopy": true, 00:08:09.839 "get_zone_info": false, 00:08:09.839 "zone_management": false, 00:08:09.839 "zone_append": false, 00:08:09.839 "compare": false, 00:08:09.839 "compare_and_write": false, 00:08:09.839 "abort": true, 00:08:09.839 "seek_hole": false, 00:08:09.839 "seek_data": false, 00:08:09.839 "copy": true, 00:08:09.839 "nvme_iov_md": false 00:08:09.839 }, 00:08:09.839 "memory_domains": [ 00:08:09.839 { 00:08:09.839 "dma_device_id": "system", 00:08:09.839 "dma_device_type": 1 00:08:09.839 }, 00:08:09.839 { 00:08:09.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.839 "dma_device_type": 2 00:08:09.839 } 00:08:09.839 ], 00:08:09.839 "driver_specific": {} 00:08:09.839 } 00:08:09.839 ] 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:09.839 03:19:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.839 03:19:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.839 "name": "Existed_Raid", 00:08:09.839 "uuid": "137d7d52-b566-4e8a-89eb-5326dec61dc9", 00:08:09.839 "strip_size_kb": 64, 00:08:09.839 "state": "online", 00:08:09.839 "raid_level": "raid0", 00:08:09.839 "superblock": true, 00:08:09.839 "num_base_bdevs": 2, 00:08:09.839 "num_base_bdevs_discovered": 2, 00:08:09.839 "num_base_bdevs_operational": 2, 00:08:09.839 "base_bdevs_list": [ 00:08:09.839 { 00:08:09.839 "name": "BaseBdev1", 00:08:09.839 "uuid": "3e61c63c-bf22-4109-abc0-970a397909eb", 00:08:09.839 "is_configured": true, 00:08:09.839 "data_offset": 2048, 00:08:09.839 "data_size": 63488 00:08:09.839 }, 00:08:09.839 { 00:08:09.839 "name": "BaseBdev2", 00:08:09.839 "uuid": "a4ba6f77-d4c3-4cdd-b175-e89d68d889eb", 00:08:09.839 "is_configured": true, 00:08:09.839 "data_offset": 2048, 00:08:09.839 "data_size": 63488 00:08:09.839 } 00:08:09.839 ] 00:08:09.839 }' 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.839 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 [2024-11-05 03:19:23.817183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.407 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.407 "name": "Existed_Raid", 00:08:10.407 "aliases": [ 00:08:10.407 "137d7d52-b566-4e8a-89eb-5326dec61dc9" 00:08:10.407 ], 00:08:10.407 "product_name": "Raid Volume", 00:08:10.407 "block_size": 512, 00:08:10.407 "num_blocks": 126976, 00:08:10.407 "uuid": "137d7d52-b566-4e8a-89eb-5326dec61dc9", 00:08:10.407 "assigned_rate_limits": { 00:08:10.407 "rw_ios_per_sec": 0, 00:08:10.407 "rw_mbytes_per_sec": 0, 00:08:10.407 "r_mbytes_per_sec": 0, 00:08:10.407 "w_mbytes_per_sec": 0 00:08:10.407 }, 00:08:10.407 "claimed": false, 00:08:10.407 "zoned": false, 00:08:10.407 "supported_io_types": { 00:08:10.407 "read": true, 00:08:10.407 "write": true, 00:08:10.407 "unmap": true, 00:08:10.407 "flush": true, 00:08:10.407 "reset": true, 00:08:10.407 "nvme_admin": false, 00:08:10.407 "nvme_io": false, 00:08:10.407 "nvme_io_md": false, 00:08:10.407 "write_zeroes": true, 00:08:10.407 "zcopy": false, 00:08:10.407 "get_zone_info": false, 00:08:10.407 "zone_management": false, 00:08:10.407 "zone_append": false, 00:08:10.407 "compare": false, 00:08:10.407 "compare_and_write": false, 00:08:10.407 "abort": false, 00:08:10.407 "seek_hole": false, 00:08:10.407 "seek_data": false, 00:08:10.407 "copy": false, 00:08:10.407 "nvme_iov_md": false 00:08:10.407 }, 00:08:10.407 "memory_domains": [ 00:08:10.407 { 00:08:10.407 
"dma_device_id": "system", 00:08:10.408 "dma_device_type": 1 00:08:10.408 }, 00:08:10.408 { 00:08:10.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.408 "dma_device_type": 2 00:08:10.408 }, 00:08:10.408 { 00:08:10.408 "dma_device_id": "system", 00:08:10.408 "dma_device_type": 1 00:08:10.408 }, 00:08:10.408 { 00:08:10.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.408 "dma_device_type": 2 00:08:10.408 } 00:08:10.408 ], 00:08:10.408 "driver_specific": { 00:08:10.408 "raid": { 00:08:10.408 "uuid": "137d7d52-b566-4e8a-89eb-5326dec61dc9", 00:08:10.408 "strip_size_kb": 64, 00:08:10.408 "state": "online", 00:08:10.408 "raid_level": "raid0", 00:08:10.408 "superblock": true, 00:08:10.408 "num_base_bdevs": 2, 00:08:10.408 "num_base_bdevs_discovered": 2, 00:08:10.408 "num_base_bdevs_operational": 2, 00:08:10.408 "base_bdevs_list": [ 00:08:10.408 { 00:08:10.408 "name": "BaseBdev1", 00:08:10.408 "uuid": "3e61c63c-bf22-4109-abc0-970a397909eb", 00:08:10.408 "is_configured": true, 00:08:10.408 "data_offset": 2048, 00:08:10.408 "data_size": 63488 00:08:10.408 }, 00:08:10.408 { 00:08:10.408 "name": "BaseBdev2", 00:08:10.408 "uuid": "a4ba6f77-d4c3-4cdd-b175-e89d68d889eb", 00:08:10.408 "is_configured": true, 00:08:10.408 "data_offset": 2048, 00:08:10.408 "data_size": 63488 00:08:10.408 } 00:08:10.408 ] 00:08:10.408 } 00:08:10.408 } 00:08:10.408 }' 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.408 BaseBdev2' 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.408 03:19:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.408 03:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.408 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.667 [2024-11-05 03:19:24.088978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.667 [2024-11-05 03:19:24.089141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.667 [2024-11-05 03:19:24.089335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:10.667 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.668 "name": "Existed_Raid", 00:08:10.668 "uuid": "137d7d52-b566-4e8a-89eb-5326dec61dc9", 00:08:10.668 "strip_size_kb": 64, 00:08:10.668 "state": "offline", 00:08:10.668 "raid_level": "raid0", 00:08:10.668 "superblock": true, 00:08:10.668 "num_base_bdevs": 2, 00:08:10.668 "num_base_bdevs_discovered": 1, 00:08:10.668 "num_base_bdevs_operational": 1, 00:08:10.668 "base_bdevs_list": [ 00:08:10.668 { 00:08:10.668 "name": null, 00:08:10.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.668 "is_configured": false, 00:08:10.668 "data_offset": 0, 00:08:10.668 "data_size": 63488 00:08:10.668 }, 00:08:10.668 { 00:08:10.668 "name": "BaseBdev2", 00:08:10.668 "uuid": "a4ba6f77-d4c3-4cdd-b175-e89d68d889eb", 00:08:10.668 "is_configured": true, 00:08:10.668 "data_offset": 2048, 00:08:10.668 "data_size": 63488 00:08:10.668 } 00:08:10.668 ] 
00:08:10.668 }' 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.668 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.236 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 [2024-11-05 03:19:24.783417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.236 [2024-11-05 03:19:24.783628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.495 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.495 03:19:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60675 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60675 ']' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60675 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60675 00:08:11.496 killing process with pid 60675 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60675' 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60675 00:08:11.496 [2024-11-05 03:19:24.973535] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.496 03:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60675 00:08:11.496 [2024-11-05 03:19:24.988638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.874 03:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.874 00:08:12.874 real 0m5.690s 00:08:12.874 user 0m8.643s 00:08:12.874 sys 0m0.757s 00:08:12.874 03:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.874 03:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.874 ************************************ 00:08:12.874 END TEST raid_state_function_test_sb 00:08:12.874 ************************************ 00:08:12.874 03:19:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:12.874 03:19:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:12.874 03:19:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.874 03:19:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.874 ************************************ 00:08:12.874 START TEST raid_superblock_test 00:08:12.874 ************************************ 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60932 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60932 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60932 ']' 00:08:12.874 
03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.874 03:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.874 [2024-11-05 03:19:26.242806] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:12.874 [2024-11-05 03:19:26.243477] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60932 ] 00:08:12.874 [2024-11-05 03:19:26.437501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.133 [2024-11-05 03:19:26.594108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.392 [2024-11-05 03:19:26.828214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.392 [2024-11-05 03:19:26.828313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.650 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.651 malloc1 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.651 [2024-11-05 03:19:27.254060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.651 [2024-11-05 03:19:27.254293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.651 [2024-11-05 03:19:27.254386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:13.651 [2024-11-05 03:19:27.254626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:13.651 [2024-11-05 03:19:27.257597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.651 [2024-11-05 03:19:27.257774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.651 pt1 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.651 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.910 malloc2 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.910 [2024-11-05 03:19:27.312633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.910 [2024-11-05 03:19:27.312879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.910 [2024-11-05 03:19:27.312921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:13.910 [2024-11-05 03:19:27.312939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.910 [2024-11-05 03:19:27.315808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.910 [2024-11-05 03:19:27.315871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.910 pt2 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.910 [2024-11-05 03:19:27.320796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.910 [2024-11-05 03:19:27.323509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.910 [2024-11-05 03:19:27.323836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:13.910 [2024-11-05 03:19:27.323966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:13.910 [2024-11-05 03:19:27.324333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.910 [2024-11-05 03:19:27.324653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:13.910 [2024-11-05 03:19:27.324782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:13.910 [2024-11-05 03:19:27.325024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.910 03:19:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.910 "name": "raid_bdev1", 00:08:13.910 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:13.910 "strip_size_kb": 64, 00:08:13.910 "state": "online", 00:08:13.910 "raid_level": "raid0", 00:08:13.910 "superblock": true, 00:08:13.910 "num_base_bdevs": 2, 00:08:13.910 "num_base_bdevs_discovered": 2, 00:08:13.910 "num_base_bdevs_operational": 2, 00:08:13.910 "base_bdevs_list": [ 00:08:13.910 { 00:08:13.910 "name": "pt1", 00:08:13.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.910 "is_configured": true, 00:08:13.910 "data_offset": 2048, 00:08:13.910 "data_size": 63488 00:08:13.910 }, 00:08:13.910 { 00:08:13.910 "name": "pt2", 00:08:13.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.910 "is_configured": true, 00:08:13.910 "data_offset": 2048, 00:08:13.910 "data_size": 63488 00:08:13.910 } 00:08:13.910 ] 00:08:13.910 }' 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.910 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.478 
03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.478 [2024-11-05 03:19:27.865476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.478 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.478 "name": "raid_bdev1", 00:08:14.478 "aliases": [ 00:08:14.478 "aca522d6-5bca-42a6-bd0a-e0f94833549e" 00:08:14.478 ], 00:08:14.478 "product_name": "Raid Volume", 00:08:14.478 "block_size": 512, 00:08:14.478 "num_blocks": 126976, 00:08:14.478 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:14.478 "assigned_rate_limits": { 00:08:14.478 "rw_ios_per_sec": 0, 00:08:14.478 "rw_mbytes_per_sec": 0, 00:08:14.478 "r_mbytes_per_sec": 0, 00:08:14.478 "w_mbytes_per_sec": 0 00:08:14.479 }, 00:08:14.479 "claimed": false, 00:08:14.479 "zoned": false, 00:08:14.479 "supported_io_types": { 00:08:14.479 "read": true, 00:08:14.479 "write": true, 00:08:14.479 "unmap": true, 00:08:14.479 "flush": true, 00:08:14.479 "reset": true, 00:08:14.479 "nvme_admin": false, 00:08:14.479 "nvme_io": false, 00:08:14.479 "nvme_io_md": false, 00:08:14.479 "write_zeroes": true, 00:08:14.479 "zcopy": false, 00:08:14.479 "get_zone_info": false, 00:08:14.479 "zone_management": false, 00:08:14.479 "zone_append": false, 00:08:14.479 "compare": false, 00:08:14.479 "compare_and_write": false, 00:08:14.479 "abort": false, 00:08:14.479 "seek_hole": false, 00:08:14.479 
"seek_data": false, 00:08:14.479 "copy": false, 00:08:14.479 "nvme_iov_md": false 00:08:14.479 }, 00:08:14.479 "memory_domains": [ 00:08:14.479 { 00:08:14.479 "dma_device_id": "system", 00:08:14.479 "dma_device_type": 1 00:08:14.479 }, 00:08:14.479 { 00:08:14.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.479 "dma_device_type": 2 00:08:14.479 }, 00:08:14.479 { 00:08:14.479 "dma_device_id": "system", 00:08:14.479 "dma_device_type": 1 00:08:14.479 }, 00:08:14.479 { 00:08:14.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.479 "dma_device_type": 2 00:08:14.479 } 00:08:14.479 ], 00:08:14.479 "driver_specific": { 00:08:14.479 "raid": { 00:08:14.479 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:14.479 "strip_size_kb": 64, 00:08:14.479 "state": "online", 00:08:14.479 "raid_level": "raid0", 00:08:14.479 "superblock": true, 00:08:14.479 "num_base_bdevs": 2, 00:08:14.479 "num_base_bdevs_discovered": 2, 00:08:14.479 "num_base_bdevs_operational": 2, 00:08:14.479 "base_bdevs_list": [ 00:08:14.479 { 00:08:14.479 "name": "pt1", 00:08:14.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.479 "is_configured": true, 00:08:14.479 "data_offset": 2048, 00:08:14.479 "data_size": 63488 00:08:14.479 }, 00:08:14.479 { 00:08:14.479 "name": "pt2", 00:08:14.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.479 "is_configured": true, 00:08:14.479 "data_offset": 2048, 00:08:14.479 "data_size": 63488 00:08:14.479 } 00:08:14.479 ] 00:08:14.479 } 00:08:14.479 } 00:08:14.479 }' 00:08:14.479 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.479 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:14.479 pt2' 00:08:14.479 03:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.479 03:19:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.479 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 [2024-11-05 03:19:28.133528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aca522d6-5bca-42a6-bd0a-e0f94833549e 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aca522d6-5bca-42a6-bd0a-e0f94833549e ']' 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 [2024-11-05 03:19:28.181172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.801 [2024-11-05 03:19:28.181356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.801 [2024-11-05 03:19:28.181560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.801 [2024-11-05 03:19:28.181641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.801 [2024-11-05 03:19:28.181662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.801 [2024-11-05 03:19:28.321271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:14.801 [2024-11-05 03:19:28.324142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:14.801 [2024-11-05 03:19:28.324244] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:14.801 [2024-11-05 03:19:28.324344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:14.801 [2024-11-05 03:19:28.324372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.801 [2024-11-05 03:19:28.324391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:14.801 request: 00:08:14.801 { 00:08:14.801 "name": "raid_bdev1", 00:08:14.801 "raid_level": "raid0", 00:08:14.801 "base_bdevs": [ 00:08:14.801 "malloc1", 00:08:14.801 "malloc2" 00:08:14.801 ], 00:08:14.801 "strip_size_kb": 64, 00:08:14.801 "superblock": false, 00:08:14.801 "method": "bdev_raid_create", 00:08:14.801 "req_id": 1 00:08:14.801 } 00:08:14.801 Got JSON-RPC error response 00:08:14.801 response: 00:08:14.801 { 00:08:14.801 "code": -17, 00:08:14.801 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:14.801 } 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.801 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:14.802 
03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 [2024-11-05 03:19:28.397264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.802 [2024-11-05 03:19:28.397469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.802 [2024-11-05 03:19:28.397543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:14.802 [2024-11-05 03:19:28.397809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.802 [2024-11-05 03:19:28.401029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.802 [2024-11-05 03:19:28.401259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.802 [2024-11-05 03:19:28.401474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.802 [2024-11-05 03:19:28.401691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.802 pt1 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.802 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.061 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.061 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.061 "name": "raid_bdev1", 00:08:15.061 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:15.061 "strip_size_kb": 64, 00:08:15.061 "state": "configuring", 00:08:15.061 "raid_level": "raid0", 00:08:15.061 "superblock": true, 00:08:15.061 "num_base_bdevs": 2, 00:08:15.061 "num_base_bdevs_discovered": 1, 00:08:15.061 "num_base_bdevs_operational": 2, 00:08:15.061 "base_bdevs_list": [ 00:08:15.061 { 00:08:15.061 "name": "pt1", 00:08:15.061 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:15.061 "is_configured": true, 00:08:15.061 "data_offset": 2048, 00:08:15.061 "data_size": 63488 00:08:15.061 }, 00:08:15.061 { 00:08:15.061 "name": null, 00:08:15.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.061 "is_configured": false, 00:08:15.061 "data_offset": 2048, 00:08:15.061 "data_size": 63488 00:08:15.061 } 00:08:15.061 ] 00:08:15.061 }' 00:08:15.061 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.061 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.320 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:15.320 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:15.320 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.320 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.320 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.320 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.320 [2024-11-05 03:19:28.953801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.320 [2024-11-05 03:19:28.954062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.320 [2024-11-05 03:19:28.954235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:15.320 [2024-11-05 03:19:28.954403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.320 [2024-11-05 03:19:28.955151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.320 [2024-11-05 03:19:28.955200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:15.320 [2024-11-05 03:19:28.955322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:15.320 [2024-11-05 03:19:28.955361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.320 [2024-11-05 03:19:28.955503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:15.320 [2024-11-05 03:19:28.955525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.579 pt2 00:08:15.579 [2024-11-05 03:19:28.955842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:15.579 [2024-11-05 03:19:28.956035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:15.579 [2024-11-05 03:19:28.956051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:15.579 [2024-11-05 03:19:28.956270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.579 03:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.580 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.580 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.580 03:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.580 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.580 "name": "raid_bdev1", 00:08:15.580 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:15.580 "strip_size_kb": 64, 00:08:15.580 "state": "online", 00:08:15.580 "raid_level": "raid0", 00:08:15.580 "superblock": true, 00:08:15.580 "num_base_bdevs": 2, 00:08:15.580 "num_base_bdevs_discovered": 2, 00:08:15.580 "num_base_bdevs_operational": 2, 00:08:15.580 "base_bdevs_list": [ 00:08:15.580 { 00:08:15.580 "name": "pt1", 00:08:15.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.580 "is_configured": true, 00:08:15.580 "data_offset": 2048, 00:08:15.580 "data_size": 63488 00:08:15.580 }, 00:08:15.580 { 00:08:15.580 "name": "pt2", 00:08:15.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.580 "is_configured": true, 00:08:15.580 "data_offset": 2048, 00:08:15.580 "data_size": 63488 00:08:15.580 } 00:08:15.580 ] 00:08:15.580 }' 00:08:15.580 03:19:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.580 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.152 [2024-11-05 03:19:29.498403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.152 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.152 "name": "raid_bdev1", 00:08:16.152 "aliases": [ 00:08:16.152 "aca522d6-5bca-42a6-bd0a-e0f94833549e" 00:08:16.152 ], 00:08:16.152 "product_name": "Raid Volume", 00:08:16.152 "block_size": 512, 00:08:16.152 "num_blocks": 126976, 00:08:16.152 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:16.152 "assigned_rate_limits": { 00:08:16.152 "rw_ios_per_sec": 0, 00:08:16.152 "rw_mbytes_per_sec": 0, 00:08:16.152 
"r_mbytes_per_sec": 0, 00:08:16.152 "w_mbytes_per_sec": 0 00:08:16.152 }, 00:08:16.152 "claimed": false, 00:08:16.153 "zoned": false, 00:08:16.153 "supported_io_types": { 00:08:16.153 "read": true, 00:08:16.153 "write": true, 00:08:16.153 "unmap": true, 00:08:16.153 "flush": true, 00:08:16.153 "reset": true, 00:08:16.153 "nvme_admin": false, 00:08:16.153 "nvme_io": false, 00:08:16.153 "nvme_io_md": false, 00:08:16.153 "write_zeroes": true, 00:08:16.153 "zcopy": false, 00:08:16.153 "get_zone_info": false, 00:08:16.153 "zone_management": false, 00:08:16.153 "zone_append": false, 00:08:16.153 "compare": false, 00:08:16.153 "compare_and_write": false, 00:08:16.153 "abort": false, 00:08:16.153 "seek_hole": false, 00:08:16.153 "seek_data": false, 00:08:16.153 "copy": false, 00:08:16.153 "nvme_iov_md": false 00:08:16.153 }, 00:08:16.153 "memory_domains": [ 00:08:16.153 { 00:08:16.153 "dma_device_id": "system", 00:08:16.153 "dma_device_type": 1 00:08:16.153 }, 00:08:16.153 { 00:08:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.153 "dma_device_type": 2 00:08:16.153 }, 00:08:16.153 { 00:08:16.153 "dma_device_id": "system", 00:08:16.153 "dma_device_type": 1 00:08:16.153 }, 00:08:16.153 { 00:08:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.153 "dma_device_type": 2 00:08:16.153 } 00:08:16.153 ], 00:08:16.153 "driver_specific": { 00:08:16.153 "raid": { 00:08:16.153 "uuid": "aca522d6-5bca-42a6-bd0a-e0f94833549e", 00:08:16.153 "strip_size_kb": 64, 00:08:16.153 "state": "online", 00:08:16.153 "raid_level": "raid0", 00:08:16.153 "superblock": true, 00:08:16.153 "num_base_bdevs": 2, 00:08:16.153 "num_base_bdevs_discovered": 2, 00:08:16.153 "num_base_bdevs_operational": 2, 00:08:16.153 "base_bdevs_list": [ 00:08:16.154 { 00:08:16.154 "name": "pt1", 00:08:16.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.154 "is_configured": true, 00:08:16.154 "data_offset": 2048, 00:08:16.154 "data_size": 63488 00:08:16.154 }, 00:08:16.154 { 00:08:16.154 "name": 
"pt2", 00:08:16.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.154 "is_configured": true, 00:08:16.154 "data_offset": 2048, 00:08:16.154 "data_size": 63488 00:08:16.154 } 00:08:16.154 ] 00:08:16.154 } 00:08:16.154 } 00:08:16.154 }' 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:16.154 pt2' 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.154 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.155 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:16.155 [2024-11-05 03:19:29.786469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aca522d6-5bca-42a6-bd0a-e0f94833549e '!=' aca522d6-5bca-42a6-bd0a-e0f94833549e ']' 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60932 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60932 ']' 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 60932 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60932 00:08:16.416 killing process with pid 60932 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60932' 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60932 00:08:16.416 [2024-11-05 03:19:29.871107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.416 03:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60932 00:08:16.416 [2024-11-05 03:19:29.871581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.416 [2024-11-05 03:19:29.871697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.416 [2024-11-05 03:19:29.871732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:16.675 [2024-11-05 03:19:30.067537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.610 ************************************ 00:08:17.610 END TEST raid_superblock_test 00:08:17.610 ************************************ 00:08:17.610 03:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:17.610 00:08:17.610 real 0m5.062s 00:08:17.610 user 0m7.430s 00:08:17.610 sys 0m0.736s 00:08:17.610 03:19:31 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.610 03:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.610 03:19:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:17.610 03:19:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:17.610 03:19:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.610 03:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.869 ************************************ 00:08:17.869 START TEST raid_read_error_test 00:08:17.869 ************************************ 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.869 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uZr7m3xMGG 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61144 00:08:17.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61144 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61144 ']' 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:17.870 03:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 [2024-11-05 03:19:31.376381] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:08:17.870 [2024-11-05 03:19:31.376584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:08:18.128 [2024-11-05 03:19:31.565848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.128 [2024-11-05 03:19:31.708904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.387 [2024-11-05 03:19:31.888272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.387 [2024-11-05 03:19:31.888348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.955 BaseBdev1_malloc 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.955 true 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.955 [2024-11-05 03:19:32.442305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.955 [2024-11-05 03:19:32.442404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.955 [2024-11-05 03:19:32.442440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.955 [2024-11-05 03:19:32.442459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.955 [2024-11-05 03:19:32.445237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.955 [2024-11-05 03:19:32.445291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.955 BaseBdev1 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.955 BaseBdev2_malloc 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.955 true 00:08:18.955 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.956 [2024-11-05 03:19:32.503504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.956 [2024-11-05 03:19:32.503778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.956 [2024-11-05 03:19:32.503842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.956 [2024-11-05 03:19:32.503957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.956 [2024-11-05 03:19:32.506696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.956 [2024-11-05 03:19:32.506897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.956 BaseBdev2 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.956 [2024-11-05 03:19:32.511617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:18.956 [2024-11-05 03:19:32.514038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.956 [2024-11-05 03:19:32.514488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.956 [2024-11-05 03:19:32.514670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.956 [2024-11-05 03:19:32.514955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.956 [2024-11-05 03:19:32.515181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.956 [2024-11-05 03:19:32.515199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:18.956 [2024-11-05 03:19:32.515464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.956 "name": "raid_bdev1", 00:08:18.956 "uuid": "695697a2-fe91-410e-89c7-3285169eb97e", 00:08:18.956 "strip_size_kb": 64, 00:08:18.956 "state": "online", 00:08:18.956 "raid_level": "raid0", 00:08:18.956 "superblock": true, 00:08:18.956 "num_base_bdevs": 2, 00:08:18.956 "num_base_bdevs_discovered": 2, 00:08:18.956 "num_base_bdevs_operational": 2, 00:08:18.956 "base_bdevs_list": [ 00:08:18.956 { 00:08:18.956 "name": "BaseBdev1", 00:08:18.956 "uuid": "b122073d-50c0-53f3-805b-ed50d74e1f89", 00:08:18.956 "is_configured": true, 00:08:18.956 "data_offset": 2048, 00:08:18.956 "data_size": 63488 00:08:18.956 }, 00:08:18.956 { 00:08:18.956 "name": "BaseBdev2", 00:08:18.956 "uuid": "1d6f3f6a-169f-5f0e-9d5b-14682191cdaa", 00:08:18.956 "is_configured": true, 00:08:18.956 "data_offset": 2048, 00:08:18.956 "data_size": 63488 00:08:18.956 } 00:08:18.956 ] 00:08:18.956 }' 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.956 03:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.524 03:19:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.524 03:19:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.782 [2024-11-05 03:19:33.173111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.721 "name": "raid_bdev1", 00:08:20.721 "uuid": "695697a2-fe91-410e-89c7-3285169eb97e", 00:08:20.721 "strip_size_kb": 64, 00:08:20.721 "state": "online", 00:08:20.721 "raid_level": "raid0", 00:08:20.721 "superblock": true, 00:08:20.721 "num_base_bdevs": 2, 00:08:20.721 "num_base_bdevs_discovered": 2, 00:08:20.721 "num_base_bdevs_operational": 2, 00:08:20.721 "base_bdevs_list": [ 00:08:20.721 { 00:08:20.721 "name": "BaseBdev1", 00:08:20.721 "uuid": "b122073d-50c0-53f3-805b-ed50d74e1f89", 00:08:20.721 "is_configured": true, 00:08:20.721 "data_offset": 2048, 00:08:20.721 "data_size": 63488 00:08:20.721 }, 00:08:20.721 { 00:08:20.721 "name": "BaseBdev2", 00:08:20.721 "uuid": "1d6f3f6a-169f-5f0e-9d5b-14682191cdaa", 00:08:20.721 "is_configured": true, 00:08:20.721 "data_offset": 2048, 00:08:20.721 "data_size": 63488 00:08:20.721 } 00:08:20.721 ] 00:08:20.721 }' 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.721 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.980 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.980 03:19:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.980 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.980 [2024-11-05 03:19:34.614959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.980 [2024-11-05 03:19:34.615000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.239 [2024-11-05 03:19:34.618651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.239 [2024-11-05 03:19:34.618749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.239 [2024-11-05 03:19:34.618788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.239 [2024-11-05 03:19:34.618805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.239 { 00:08:21.239 "results": [ 00:08:21.239 { 00:08:21.239 "job": "raid_bdev1", 00:08:21.239 "core_mask": "0x1", 00:08:21.239 "workload": "randrw", 00:08:21.239 "percentage": 50, 00:08:21.239 "status": "finished", 00:08:21.239 "queue_depth": 1, 00:08:21.239 "io_size": 131072, 00:08:21.239 "runtime": 1.439553, 00:08:21.239 "iops": 12048.879061764312, 00:08:21.239 "mibps": 1506.109882720539, 00:08:21.239 "io_failed": 1, 00:08:21.239 "io_timeout": 0, 00:08:21.239 "avg_latency_us": 115.83963103885621, 00:08:21.239 "min_latency_us": 34.90909090909091, 00:08:21.239 "max_latency_us": 1995.8690909090908 00:08:21.239 } 00:08:21.239 ], 00:08:21.239 "core_count": 1 00:08:21.239 } 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61144 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61144 ']' 00:08:21.239 03:19:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61144 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61144 00:08:21.239 killing process with pid 61144 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61144' 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61144 00:08:21.239 03:19:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61144 00:08:21.239 [2024-11-05 03:19:34.656790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.239 [2024-11-05 03:19:34.760197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uZr7m3xMGG 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:08:22.176 00:08:22.176 real 0m4.484s 00:08:22.176 user 0m5.720s 00:08:22.176 sys 0m0.557s 00:08:22.176 ************************************ 00:08:22.176 END TEST raid_read_error_test 00:08:22.176 ************************************ 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.176 03:19:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.176 03:19:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:22.176 03:19:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:22.176 03:19:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.176 03:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.176 ************************************ 00:08:22.176 START TEST raid_write_error_test 00:08:22.176 ************************************ 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.176 03:19:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.176 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kFgKviejKW 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61295 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61295 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61295 ']' 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.177 03:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.435 [2024-11-05 03:19:35.909454] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:22.435 [2024-11-05 03:19:35.910010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61295 ] 00:08:22.693 [2024-11-05 03:19:36.091805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.693 [2024-11-05 03:19:36.204578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.952 [2024-11-05 03:19:36.393788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.952 [2024-11-05 03:19:36.394089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.520 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.520 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:23.520 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:08:23.520 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.520 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.520 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.520 BaseBdev1_malloc 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.521 true 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.521 [2024-11-05 03:19:36.944888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.521 [2024-11-05 03:19:36.945145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.521 [2024-11-05 03:19:36.945187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.521 [2024-11-05 03:19:36.945206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.521 [2024-11-05 03:19:36.948237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.521 [2024-11-05 03:19:36.948480] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.521 BaseBdev1 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.521 BaseBdev2_malloc 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.521 true 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.521 03:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.521 [2024-11-05 03:19:36.997476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.521 [2024-11-05 03:19:36.997773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.521 [2024-11-05 03:19:36.997808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.521 
[2024-11-05 03:19:36.997826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:23.521 [2024-11-05 03:19:37.000763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:23.521 [2024-11-05 03:19:37.000987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:23.521 BaseBdev2
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.521 [2024-11-05 03:19:37.005695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:23.521 [2024-11-05 03:19:37.008326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:23.521 [2024-11-05 03:19:37.008611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:23.521 [2024-11-05 03:19:37.008636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:23.521 [2024-11-05 03:19:37.008930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:23.521 [2024-11-05 03:19:37.009136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:23.521 [2024-11-05 03:19:37.009155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:23.521 [2024-11-05 03:19:37.009377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.521 "name": "raid_bdev1",
00:08:23.521 "uuid": "73e5bb2e-2ddd-4100-a4e3-92df01c3f6c8",
00:08:23.521 "strip_size_kb": 64,
00:08:23.521 "state": "online",
00:08:23.521 "raid_level": "raid0",
00:08:23.521 "superblock": true,
00:08:23.521 "num_base_bdevs": 2,
00:08:23.521 "num_base_bdevs_discovered": 2,
00:08:23.521 "num_base_bdevs_operational": 2,
00:08:23.521 "base_bdevs_list": [
00:08:23.521 {
00:08:23.521 "name": "BaseBdev1",
00:08:23.521 "uuid": "01596a20-783c-59c0-9198-9fbb61fb592e",
00:08:23.521 "is_configured": true,
00:08:23.521 "data_offset": 2048,
00:08:23.521 "data_size": 63488
00:08:23.521 },
00:08:23.521 {
00:08:23.521 "name": "BaseBdev2",
00:08:23.521 "uuid": "59f9ded5-4194-5627-bd0c-7496e6d48db1",
00:08:23.521 "is_configured": true,
00:08:23.521 "data_offset": 2048,
00:08:23.521 "data_size": 63488
00:08:23.521 }
00:08:23.521 ]
00:08:23.521 }'
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.521 03:19:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.089 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:24.089 03:19:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:24.089 [2024-11-05 03:19:37.663411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:25.026 "name": "raid_bdev1",
00:08:25.026 "uuid": "73e5bb2e-2ddd-4100-a4e3-92df01c3f6c8",
00:08:25.026 "strip_size_kb": 64,
00:08:25.026 "state": "online",
00:08:25.026 "raid_level": "raid0",
00:08:25.026 "superblock": true,
00:08:25.026 "num_base_bdevs": 2,
00:08:25.026 "num_base_bdevs_discovered": 2,
00:08:25.026 "num_base_bdevs_operational": 2,
00:08:25.026 "base_bdevs_list": [
00:08:25.026 {
00:08:25.026 "name": "BaseBdev1",
00:08:25.026 "uuid": "01596a20-783c-59c0-9198-9fbb61fb592e",
00:08:25.026 "is_configured": true,
00:08:25.026 "data_offset": 2048,
00:08:25.026 "data_size": 63488
00:08:25.026 },
00:08:25.026 {
00:08:25.026 "name": "BaseBdev2",
00:08:25.026 "uuid": "59f9ded5-4194-5627-bd0c-7496e6d48db1",
00:08:25.026 "is_configured": true,
00:08:25.026 "data_offset": 2048,
00:08:25.026 "data_size": 63488
00:08:25.026 }
00:08:25.026 ]
00:08:25.026 }'
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:25.026 03:19:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.594 [2024-11-05 03:19:39.097243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:25.594 [2024-11-05 03:19:39.097501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:25.594 {
00:08:25.594 "results": [
00:08:25.594 {
00:08:25.594 "job": "raid_bdev1",
00:08:25.594 "core_mask": "0x1",
00:08:25.594 "workload": "randrw",
00:08:25.594 "percentage": 50,
00:08:25.594 "status": "finished",
00:08:25.594 "queue_depth": 1,
00:08:25.594 "io_size": 131072,
00:08:25.594 "runtime": 1.431079,
00:08:25.594 "iops": 12165.64564220424,
00:08:25.594 "mibps": 1520.70570527553,
00:08:25.594 "io_failed": 1,
00:08:25.594 "io_timeout": 0,
00:08:25.594 "avg_latency_us": 114.63730285451727,
00:08:25.594 "min_latency_us": 36.07272727272727,
00:08:25.594 "max_latency_us": 1750.1090909090908
00:08:25.594 }
00:08:25.594 ],
00:08:25.594 "core_count": 1
00:08:25.594 }
00:08:25.594 [2024-11-05 03:19:39.100994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:25.594 [2024-11-05 03:19:39.101043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:25.594 [2024-11-05 03:19:39.101082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:25.594 [2024-11-05 03:19:39.101098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61295
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 61295 ']'
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61295
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61295 killing process with pid 61295
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61295'
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61295
00:08:25.594 [2024-11-05 03:19:39.137412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:25.594 03:19:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61295
00:08:25.853 [2024-11-05 03:19:39.241858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kFgKviejKW
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]]
00:08:26.792
00:08:26.792 real 0m4.516s
00:08:26.792 user 0m5.688s
00:08:26.792 sys 0m0.568s
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:26.792 03:19:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.792 ************************************
00:08:26.792 END TEST raid_write_error_test
00:08:26.792 ************************************
00:08:26.792 03:19:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:26.792 03:19:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:08:26.792 03:19:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:08:26.792 03:19:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:26.792 03:19:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:26.792 ************************************
00:08:26.792 START TEST raid_state_function_test
00:08:26.792 ************************************
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61433
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61433'
Process raid pid: 61433
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61433
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61433 ']'
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:26.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:26.792 03:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.051 [2024-11-05 03:19:40.477889] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:08:27.051 [2024-11-05 03:19:40.478342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:27.051 [2024-11-05 03:19:40.661997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:27.310 [2024-11-05 03:19:40.788578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.569 [2024-11-05 03:19:40.983436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:27.569 [2024-11-05 03:19:40.983491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.137 [2024-11-05 03:19:41.471950] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:28.137 [2024-11-05 03:19:41.472221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:28.137 [2024-11-05 03:19:41.472250] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:28.137 [2024-11-05 03:19:41.472268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:28.137 "name": "Existed_Raid",
00:08:28.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.137 "strip_size_kb": 64,
00:08:28.137 "state": "configuring",
00:08:28.137 "raid_level": "concat",
00:08:28.137 "superblock": false,
00:08:28.137 "num_base_bdevs": 2,
00:08:28.137 "num_base_bdevs_discovered": 0,
00:08:28.137 "num_base_bdevs_operational": 2,
00:08:28.137 "base_bdevs_list": [
00:08:28.137 {
00:08:28.137 "name": "BaseBdev1",
00:08:28.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.137 "is_configured": false,
00:08:28.137 "data_offset": 0,
00:08:28.137 "data_size": 0
00:08:28.137 },
00:08:28.137 {
00:08:28.137 "name": "BaseBdev2",
00:08:28.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.137 "is_configured": false,
00:08:28.137 "data_offset": 0,
00:08:28.137 "data_size": 0
00:08:28.137 }
00:08:28.137 ]
00:08:28.137 }'
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:28.137 03:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.397 [2024-11-05 03:19:42.008083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:28.397 [2024-11-05 03:19:42.008320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.397 [2024-11-05 03:19:42.020041] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:28.397 [2024-11-05 03:19:42.020324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:28.397 [2024-11-05 03:19:42.020428] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:28.397 [2024-11-05 03:19:42.020453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.397 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.656 [2024-11-05 03:19:42.066720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:28.656 BaseBdev1
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.656 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.656 [
00:08:28.656 {
00:08:28.656 "name": "BaseBdev1",
00:08:28.656 "aliases": [
00:08:28.656 "2c635294-7253-42ed-a0f4-791d455f5dff"
00:08:28.656 ],
00:08:28.657 "product_name": "Malloc disk",
00:08:28.657 "block_size": 512,
00:08:28.657 "num_blocks": 65536,
00:08:28.657 "uuid": "2c635294-7253-42ed-a0f4-791d455f5dff",
00:08:28.657 "assigned_rate_limits": {
00:08:28.657 "rw_ios_per_sec": 0,
00:08:28.657 "rw_mbytes_per_sec": 0,
00:08:28.657 "r_mbytes_per_sec": 0,
00:08:28.657 "w_mbytes_per_sec": 0
00:08:28.657 },
00:08:28.657 "claimed": true,
00:08:28.657 "claim_type": "exclusive_write",
00:08:28.657 "zoned": false,
00:08:28.657 "supported_io_types": {
00:08:28.657 "read": true,
00:08:28.657 "write": true,
00:08:28.657 "unmap": true,
00:08:28.657 "flush": true,
00:08:28.657 "reset": true,
00:08:28.657 "nvme_admin": false,
00:08:28.657 "nvme_io": false,
00:08:28.657 "nvme_io_md": false,
00:08:28.657 "write_zeroes": true,
00:08:28.657 "zcopy": true,
00:08:28.657 "get_zone_info": false,
00:08:28.657 "zone_management": false,
00:08:28.657 "zone_append": false,
00:08:28.657 "compare": false,
00:08:28.657 "compare_and_write": false,
00:08:28.657 "abort": true,
00:08:28.657 "seek_hole": false,
00:08:28.657 "seek_data": false,
00:08:28.657 "copy": true,
00:08:28.657 "nvme_iov_md": false
00:08:28.657 },
00:08:28.657 "memory_domains": [
00:08:28.657 {
00:08:28.657 "dma_device_id": "system",
00:08:28.657 "dma_device_type": 1
00:08:28.657 },
00:08:28.657 {
00:08:28.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:28.657 "dma_device_type": 2
00:08:28.657 }
00:08:28.657 ],
00:08:28.657 "driver_specific": {}
00:08:28.657 }
00:08:28.657 ]
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:28.657 "name": "Existed_Raid",
00:08:28.657 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.657 "strip_size_kb": 64,
00:08:28.657 "state": "configuring",
00:08:28.657 "raid_level": "concat",
00:08:28.657 "superblock": false,
00:08:28.657 "num_base_bdevs": 2,
00:08:28.657 "num_base_bdevs_discovered": 1,
00:08:28.657 "num_base_bdevs_operational": 2,
00:08:28.657 "base_bdevs_list": [
00:08:28.657 {
00:08:28.657 "name": "BaseBdev1",
00:08:28.657 "uuid": "2c635294-7253-42ed-a0f4-791d455f5dff",
00:08:28.657 "is_configured": true,
00:08:28.657 "data_offset": 0,
00:08:28.657 "data_size": 65536
00:08:28.657 },
00:08:28.657 {
00:08:28.657 "name": "BaseBdev2",
00:08:28.657 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.657 "is_configured": false,
00:08:28.657 "data_offset": 0,
00:08:28.657 "data_size": 0
00:08:28.657 }
00:08:28.657 ]
00:08:28.657 }'
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:28.657 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.225 [2024-11-05 03:19:42.630985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:29.225 [2024-11-05 03:19:42.631203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.225 [2024-11-05 03:19:42.639033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:29.225 [2024-11-05 03:19:42.641711] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:29.225 [2024-11-05 03:19:42.641958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:29.225 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:29.226 "name": "Existed_Raid",
00:08:29.226 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:29.226 "strip_size_kb": 64,
00:08:29.226 "state": "configuring",
00:08:29.226 "raid_level": "concat",
00:08:29.226 "superblock": false,
00:08:29.226 "num_base_bdevs": 2,
00:08:29.226 "num_base_bdevs_discovered": 1,
00:08:29.226 "num_base_bdevs_operational": 2,
00:08:29.226 "base_bdevs_list": [
00:08:29.226 {
00:08:29.226 "name": "BaseBdev1",
00:08:29.226 "uuid": "2c635294-7253-42ed-a0f4-791d455f5dff",
00:08:29.226 "is_configured": true,
00:08:29.226 "data_offset": 0,
00:08:29.226 "data_size": 65536
00:08:29.226 },
00:08:29.226 {
00:08:29.226 "name": "BaseBdev2",
00:08:29.226 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:29.226 "is_configured": false,
00:08:29.226 "data_offset": 0,
00:08:29.226 "data_size": 0
00:08:29.226 }
00:08:29.226 ]
00:08:29.226 }'
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:29.226 03:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.794 [2024-11-05 03:19:43.194789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:29.794 [2024-11-05 03:19:43.194853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:29.794 [2024-11-05 03:19:43.194865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:29.794 [2024-11-05 03:19:43.195171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:29.794 [2024-11-05 03:19:43.195417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:29.794 [2024-11-05 03:19:43.195440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:29.794 BaseBdev2
00:08:29.794 [2024-11-05 03:19:43.195786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.794 [
00:08:29.794 {
00:08:29.794 "name": "BaseBdev2",
00:08:29.794 "aliases": [
00:08:29.794 "9b7d1838-7122-4950-9ac4-5c347b564383"
00:08:29.794 ],
00:08:29.794 "product_name": "Malloc disk",
00:08:29.794 "block_size": 512,
00:08:29.794 "num_blocks": 65536,
00:08:29.794 "uuid": "9b7d1838-7122-4950-9ac4-5c347b564383",
00:08:29.794 "assigned_rate_limits": {
00:08:29.794 "rw_ios_per_sec": 0,
00:08:29.794 "rw_mbytes_per_sec": 0,
00:08:29.794 "r_mbytes_per_sec": 0,
00:08:29.794 "w_mbytes_per_sec": 0
00:08:29.794 },
00:08:29.794 "claimed": true,
00:08:29.794 "claim_type": "exclusive_write",
00:08:29.794 "zoned": false,
00:08:29.794 "supported_io_types": {
00:08:29.794 "read": true,
00:08:29.794 "write": true,
00:08:29.794 "unmap": true,
00:08:29.794 "flush": true,
00:08:29.794 "reset": true,
00:08:29.794 "nvme_admin": false,
00:08:29.794 "nvme_io": false,
00:08:29.794 "nvme_io_md": false,
00:08:29.794 "write_zeroes": true,
00:08:29.794 "zcopy": true,
00:08:29.794 "get_zone_info": false,
00:08:29.794 "zone_management": false,
00:08:29.794 "zone_append": false,
00:08:29.794 "compare": false,
00:08:29.794 "compare_and_write": false,
00:08:29.794 "abort": true,
00:08:29.794 "seek_hole": false,
00:08:29.794 "seek_data": false,
00:08:29.794 "copy": true,
00:08:29.794 "nvme_iov_md": false
00:08:29.794 },
00:08:29.794 "memory_domains": [
00:08:29.794 {
00:08:29.794 "dma_device_id": "system",
00:08:29.794 "dma_device_type": 1
00:08:29.794 },
00:08:29.794 {
00:08:29.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.794 "dma_device_type": 2
00:08:29.794 }
00:08:29.794 ],
00:08:29.794 "driver_specific": {}
00:08:29.794 }
00:08:29.794 ]
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local
expected_state=online 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.794 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.795 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.795 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.795 "name": "Existed_Raid", 00:08:29.795 "uuid": "2c1382a8-d8f4-42e3-80a3-94ed51d58ecf", 00:08:29.795 "strip_size_kb": 64, 00:08:29.795 "state": "online", 00:08:29.795 "raid_level": "concat", 00:08:29.795 "superblock": false, 00:08:29.795 "num_base_bdevs": 2, 00:08:29.795 "num_base_bdevs_discovered": 2, 00:08:29.795 "num_base_bdevs_operational": 2, 00:08:29.795 "base_bdevs_list": [ 00:08:29.795 { 00:08:29.795 "name": "BaseBdev1", 00:08:29.795 "uuid": "2c635294-7253-42ed-a0f4-791d455f5dff", 00:08:29.795 
"is_configured": true, 00:08:29.795 "data_offset": 0, 00:08:29.795 "data_size": 65536 00:08:29.795 }, 00:08:29.795 { 00:08:29.795 "name": "BaseBdev2", 00:08:29.795 "uuid": "9b7d1838-7122-4950-9ac4-5c347b564383", 00:08:29.795 "is_configured": true, 00:08:29.795 "data_offset": 0, 00:08:29.795 "data_size": 65536 00:08:29.795 } 00:08:29.795 ] 00:08:29.795 }' 00:08:29.795 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.795 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.363 [2024-11-05 03:19:43.759322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:30.363 "name": "Existed_Raid", 00:08:30.363 "aliases": [ 00:08:30.363 "2c1382a8-d8f4-42e3-80a3-94ed51d58ecf" 00:08:30.363 ], 00:08:30.363 "product_name": "Raid Volume", 00:08:30.363 "block_size": 512, 00:08:30.363 "num_blocks": 131072, 00:08:30.363 "uuid": "2c1382a8-d8f4-42e3-80a3-94ed51d58ecf", 00:08:30.363 "assigned_rate_limits": { 00:08:30.363 "rw_ios_per_sec": 0, 00:08:30.363 "rw_mbytes_per_sec": 0, 00:08:30.363 "r_mbytes_per_sec": 0, 00:08:30.363 "w_mbytes_per_sec": 0 00:08:30.363 }, 00:08:30.363 "claimed": false, 00:08:30.363 "zoned": false, 00:08:30.363 "supported_io_types": { 00:08:30.363 "read": true, 00:08:30.363 "write": true, 00:08:30.363 "unmap": true, 00:08:30.363 "flush": true, 00:08:30.363 "reset": true, 00:08:30.363 "nvme_admin": false, 00:08:30.363 "nvme_io": false, 00:08:30.363 "nvme_io_md": false, 00:08:30.363 "write_zeroes": true, 00:08:30.363 "zcopy": false, 00:08:30.363 "get_zone_info": false, 00:08:30.363 "zone_management": false, 00:08:30.363 "zone_append": false, 00:08:30.363 "compare": false, 00:08:30.363 "compare_and_write": false, 00:08:30.363 "abort": false, 00:08:30.363 "seek_hole": false, 00:08:30.363 "seek_data": false, 00:08:30.363 "copy": false, 00:08:30.363 "nvme_iov_md": false 00:08:30.363 }, 00:08:30.363 "memory_domains": [ 00:08:30.363 { 00:08:30.363 "dma_device_id": "system", 00:08:30.363 "dma_device_type": 1 00:08:30.363 }, 00:08:30.363 { 00:08:30.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.363 "dma_device_type": 2 00:08:30.363 }, 00:08:30.363 { 00:08:30.363 "dma_device_id": "system", 00:08:30.363 "dma_device_type": 1 00:08:30.363 }, 00:08:30.363 { 00:08:30.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.363 "dma_device_type": 2 00:08:30.363 } 00:08:30.363 ], 00:08:30.363 "driver_specific": { 00:08:30.363 "raid": { 00:08:30.363 "uuid": "2c1382a8-d8f4-42e3-80a3-94ed51d58ecf", 00:08:30.363 "strip_size_kb": 64, 00:08:30.363 "state": "online", 00:08:30.363 "raid_level": "concat", 
00:08:30.363 "superblock": false, 00:08:30.363 "num_base_bdevs": 2, 00:08:30.363 "num_base_bdevs_discovered": 2, 00:08:30.363 "num_base_bdevs_operational": 2, 00:08:30.363 "base_bdevs_list": [ 00:08:30.363 { 00:08:30.363 "name": "BaseBdev1", 00:08:30.363 "uuid": "2c635294-7253-42ed-a0f4-791d455f5dff", 00:08:30.363 "is_configured": true, 00:08:30.363 "data_offset": 0, 00:08:30.363 "data_size": 65536 00:08:30.363 }, 00:08:30.363 { 00:08:30.363 "name": "BaseBdev2", 00:08:30.363 "uuid": "9b7d1838-7122-4950-9ac4-5c347b564383", 00:08:30.363 "is_configured": true, 00:08:30.363 "data_offset": 0, 00:08:30.363 "data_size": 65536 00:08:30.363 } 00:08:30.363 ] 00:08:30.363 } 00:08:30.363 } 00:08:30.363 }' 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.363 BaseBdev2' 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.363 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.364 03:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.623 [2024-11-05 03:19:44.027077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.623 [2024-11-05 03:19:44.027278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.623 [2024-11-05 03:19:44.027497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.623 03:19:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.623 "name": "Existed_Raid", 00:08:30.623 "uuid": "2c1382a8-d8f4-42e3-80a3-94ed51d58ecf", 00:08:30.623 "strip_size_kb": 64, 00:08:30.623 "state": "offline", 00:08:30.623 "raid_level": "concat", 00:08:30.623 "superblock": false, 00:08:30.623 "num_base_bdevs": 2, 00:08:30.623 "num_base_bdevs_discovered": 1, 00:08:30.623 "num_base_bdevs_operational": 1, 00:08:30.623 "base_bdevs_list": [ 00:08:30.623 { 00:08:30.623 "name": null, 00:08:30.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.623 "is_configured": false, 00:08:30.623 "data_offset": 0, 00:08:30.623 "data_size": 65536 00:08:30.623 }, 00:08:30.623 { 00:08:30.623 "name": "BaseBdev2", 00:08:30.623 "uuid": "9b7d1838-7122-4950-9ac4-5c347b564383", 00:08:30.623 "is_configured": true, 00:08:30.623 "data_offset": 0, 00:08:30.623 "data_size": 65536 00:08:30.623 } 00:08:30.623 ] 00:08:30.623 }' 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.623 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.192 03:19:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.192 [2024-11-05 03:19:44.713073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.192 [2024-11-05 03:19:44.713304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.192 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61433 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61433 ']' 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 61433 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61433 00:08:31.451 killing process with pid 61433 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61433' 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61433 00:08:31.451 [2024-11-05 03:19:44.925856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.451 03:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61433 00:08:31.451 [2024-11-05 03:19:44.940757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.388 00:08:32.388 real 0m5.525s 00:08:32.388 user 0m8.468s 00:08:32.388 sys 0m0.771s 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.388 ************************************ 00:08:32.388 END TEST raid_state_function_test 00:08:32.388 ************************************ 00:08:32.388 03:19:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:32.388 03:19:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:32.388 03:19:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.388 03:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.388 ************************************ 00:08:32.388 START TEST raid_state_function_test_sb 00:08:32.388 ************************************ 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.388 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:32.389 Process raid pid: 61692 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61692 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61692' 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.389 03:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61692 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61692 ']' 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.389 03:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.648 [2024-11-05 03:19:46.027603] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:08:32.648 [2024-11-05 03:19:46.027764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.648 [2024-11-05 03:19:46.199318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.907 [2024-11-05 03:19:46.317390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.907 [2024-11-05 03:19:46.497078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.907 [2024-11-05 03:19:46.497119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.475 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.475 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:33.475 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.475 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.475 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.475 [2024-11-05 03:19:47.024447] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.475 [2024-11-05 03:19:47.024722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.476 [2024-11-05 03:19:47.024842] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.476 [2024-11-05 03:19:47.024875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.476 "name": "Existed_Raid", 00:08:33.476 "uuid": "d7a33419-fa0c-4129-9069-fce69408e51f", 00:08:33.476 
"strip_size_kb": 64, 00:08:33.476 "state": "configuring", 00:08:33.476 "raid_level": "concat", 00:08:33.476 "superblock": true, 00:08:33.476 "num_base_bdevs": 2, 00:08:33.476 "num_base_bdevs_discovered": 0, 00:08:33.476 "num_base_bdevs_operational": 2, 00:08:33.476 "base_bdevs_list": [ 00:08:33.476 { 00:08:33.476 "name": "BaseBdev1", 00:08:33.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.476 "is_configured": false, 00:08:33.476 "data_offset": 0, 00:08:33.476 "data_size": 0 00:08:33.476 }, 00:08:33.476 { 00:08:33.476 "name": "BaseBdev2", 00:08:33.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.476 "is_configured": false, 00:08:33.476 "data_offset": 0, 00:08:33.476 "data_size": 0 00:08:33.476 } 00:08:33.476 ] 00:08:33.476 }' 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.476 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 [2024-11-05 03:19:47.548521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.044 [2024-11-05 03:19:47.548705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 [2024-11-05 03:19:47.556560] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.044 [2024-11-05 03:19:47.556780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.044 [2024-11-05 03:19:47.556908] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.044 [2024-11-05 03:19:47.556969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 BaseBdev1 00:08:34.044 [2024-11-05 03:19:47.598357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 [ 00:08:34.044 { 00:08:34.044 "name": "BaseBdev1", 00:08:34.044 "aliases": [ 00:08:34.044 "a572f7ab-4dc6-4cf0-9daf-8e1dded15082" 00:08:34.044 ], 00:08:34.044 "product_name": "Malloc disk", 00:08:34.044 "block_size": 512, 00:08:34.044 "num_blocks": 65536, 00:08:34.044 "uuid": "a572f7ab-4dc6-4cf0-9daf-8e1dded15082", 00:08:34.044 "assigned_rate_limits": { 00:08:34.044 "rw_ios_per_sec": 0, 00:08:34.044 "rw_mbytes_per_sec": 0, 00:08:34.044 "r_mbytes_per_sec": 0, 00:08:34.044 "w_mbytes_per_sec": 0 00:08:34.044 }, 00:08:34.044 "claimed": true, 00:08:34.044 "claim_type": "exclusive_write", 00:08:34.044 "zoned": false, 00:08:34.044 "supported_io_types": { 00:08:34.044 "read": true, 00:08:34.044 "write": true, 00:08:34.044 "unmap": true, 00:08:34.044 "flush": true, 00:08:34.044 "reset": true, 00:08:34.044 "nvme_admin": false, 00:08:34.044 "nvme_io": false, 00:08:34.044 "nvme_io_md": false, 00:08:34.044 "write_zeroes": true, 00:08:34.044 "zcopy": true, 00:08:34.044 "get_zone_info": false, 00:08:34.044 "zone_management": false, 00:08:34.044 "zone_append": false, 00:08:34.044 "compare": false, 00:08:34.044 
"compare_and_write": false, 00:08:34.044 "abort": true, 00:08:34.044 "seek_hole": false, 00:08:34.044 "seek_data": false, 00:08:34.044 "copy": true, 00:08:34.044 "nvme_iov_md": false 00:08:34.044 }, 00:08:34.044 "memory_domains": [ 00:08:34.044 { 00:08:34.044 "dma_device_id": "system", 00:08:34.044 "dma_device_type": 1 00:08:34.044 }, 00:08:34.044 { 00:08:34.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.044 "dma_device_type": 2 00:08:34.044 } 00:08:34.044 ], 00:08:34.044 "driver_specific": {} 00:08:34.044 } 00:08:34.044 ] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.044 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.045 03:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.045 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.304 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.304 "name": "Existed_Raid", 00:08:34.304 "uuid": "775793be-085a-4b9a-9d07-3ef8139d21f5", 00:08:34.304 "strip_size_kb": 64, 00:08:34.304 "state": "configuring", 00:08:34.304 "raid_level": "concat", 00:08:34.304 "superblock": true, 00:08:34.304 "num_base_bdevs": 2, 00:08:34.304 "num_base_bdevs_discovered": 1, 00:08:34.304 "num_base_bdevs_operational": 2, 00:08:34.304 "base_bdevs_list": [ 00:08:34.304 { 00:08:34.304 "name": "BaseBdev1", 00:08:34.304 "uuid": "a572f7ab-4dc6-4cf0-9daf-8e1dded15082", 00:08:34.304 "is_configured": true, 00:08:34.304 "data_offset": 2048, 00:08:34.304 "data_size": 63488 00:08:34.304 }, 00:08:34.304 { 00:08:34.304 "name": "BaseBdev2", 00:08:34.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.304 "is_configured": false, 00:08:34.304 "data_offset": 0, 00:08:34.304 "data_size": 0 00:08:34.304 } 00:08:34.304 ] 00:08:34.304 }' 00:08:34.304 03:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.304 03:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.563 [2024-11-05 03:19:48.150750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.563 [2024-11-05 03:19:48.150958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.563 [2024-11-05 03:19:48.158786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.563 [2024-11-05 03:19:48.161489] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.563 [2024-11-05 03:19:48.161722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.563 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.822 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.822 "name": "Existed_Raid", 00:08:34.822 "uuid": "5b2ac880-8552-4c9a-93f2-c61bab8dd15c", 00:08:34.822 "strip_size_kb": 64, 00:08:34.822 "state": "configuring", 00:08:34.822 "raid_level": "concat", 00:08:34.822 "superblock": true, 00:08:34.822 "num_base_bdevs": 2, 00:08:34.822 "num_base_bdevs_discovered": 1, 00:08:34.822 "num_base_bdevs_operational": 2, 00:08:34.822 "base_bdevs_list": [ 00:08:34.822 { 00:08:34.822 "name": "BaseBdev1", 00:08:34.822 "uuid": 
"a572f7ab-4dc6-4cf0-9daf-8e1dded15082", 00:08:34.822 "is_configured": true, 00:08:34.822 "data_offset": 2048, 00:08:34.822 "data_size": 63488 00:08:34.822 }, 00:08:34.822 { 00:08:34.822 "name": "BaseBdev2", 00:08:34.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.822 "is_configured": false, 00:08:34.822 "data_offset": 0, 00:08:34.822 "data_size": 0 00:08:34.822 } 00:08:34.822 ] 00:08:34.822 }' 00:08:34.822 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.822 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.082 [2024-11-05 03:19:48.713932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.082 BaseBdev2 00:08:35.082 [2024-11-05 03:19:48.714505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.082 [2024-11-05 03:19:48.714535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:35.082 [2024-11-05 03:19:48.714988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:35.082 [2024-11-05 03:19:48.715162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.082 [2024-11-05 03:19:48.715181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:35.082 [2024-11-05 03:19:48.715339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.082 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.341 [ 00:08:35.341 { 00:08:35.341 "name": "BaseBdev2", 00:08:35.341 "aliases": [ 00:08:35.341 "a3f76f8b-de1c-4ff5-80cd-3cfb53c23cba" 00:08:35.341 ], 00:08:35.341 "product_name": "Malloc disk", 00:08:35.341 "block_size": 512, 00:08:35.341 "num_blocks": 65536, 00:08:35.341 "uuid": "a3f76f8b-de1c-4ff5-80cd-3cfb53c23cba", 00:08:35.341 "assigned_rate_limits": { 00:08:35.341 "rw_ios_per_sec": 0, 00:08:35.341 "rw_mbytes_per_sec": 0, 00:08:35.341 "r_mbytes_per_sec": 0, 
00:08:35.341 "w_mbytes_per_sec": 0 00:08:35.341 }, 00:08:35.341 "claimed": true, 00:08:35.341 "claim_type": "exclusive_write", 00:08:35.341 "zoned": false, 00:08:35.341 "supported_io_types": { 00:08:35.341 "read": true, 00:08:35.341 "write": true, 00:08:35.341 "unmap": true, 00:08:35.341 "flush": true, 00:08:35.341 "reset": true, 00:08:35.341 "nvme_admin": false, 00:08:35.341 "nvme_io": false, 00:08:35.341 "nvme_io_md": false, 00:08:35.341 "write_zeroes": true, 00:08:35.341 "zcopy": true, 00:08:35.341 "get_zone_info": false, 00:08:35.341 "zone_management": false, 00:08:35.341 "zone_append": false, 00:08:35.341 "compare": false, 00:08:35.341 "compare_and_write": false, 00:08:35.341 "abort": true, 00:08:35.341 "seek_hole": false, 00:08:35.341 "seek_data": false, 00:08:35.341 "copy": true, 00:08:35.341 "nvme_iov_md": false 00:08:35.341 }, 00:08:35.341 "memory_domains": [ 00:08:35.341 { 00:08:35.341 "dma_device_id": "system", 00:08:35.341 "dma_device_type": 1 00:08:35.341 }, 00:08:35.341 { 00:08:35.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.341 "dma_device_type": 2 00:08:35.341 } 00:08:35.341 ], 00:08:35.341 "driver_specific": {} 00:08:35.341 } 00:08:35.341 ] 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.341 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.342 "name": "Existed_Raid", 00:08:35.342 "uuid": "5b2ac880-8552-4c9a-93f2-c61bab8dd15c", 00:08:35.342 "strip_size_kb": 64, 00:08:35.342 "state": "online", 00:08:35.342 "raid_level": "concat", 00:08:35.342 "superblock": true, 00:08:35.342 "num_base_bdevs": 2, 00:08:35.342 "num_base_bdevs_discovered": 2, 00:08:35.342 "num_base_bdevs_operational": 2, 00:08:35.342 "base_bdevs_list": [ 00:08:35.342 { 00:08:35.342 "name": "BaseBdev1", 00:08:35.342 "uuid": 
"a572f7ab-4dc6-4cf0-9daf-8e1dded15082", 00:08:35.342 "is_configured": true, 00:08:35.342 "data_offset": 2048, 00:08:35.342 "data_size": 63488 00:08:35.342 }, 00:08:35.342 { 00:08:35.342 "name": "BaseBdev2", 00:08:35.342 "uuid": "a3f76f8b-de1c-4ff5-80cd-3cfb53c23cba", 00:08:35.342 "is_configured": true, 00:08:35.342 "data_offset": 2048, 00:08:35.342 "data_size": 63488 00:08:35.342 } 00:08:35.342 ] 00:08:35.342 }' 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.342 03:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.910 [2024-11-05 03:19:49.286709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:35.910 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.910 "name": "Existed_Raid", 00:08:35.910 "aliases": [ 00:08:35.910 "5b2ac880-8552-4c9a-93f2-c61bab8dd15c" 00:08:35.910 ], 00:08:35.910 "product_name": "Raid Volume", 00:08:35.910 "block_size": 512, 00:08:35.910 "num_blocks": 126976, 00:08:35.910 "uuid": "5b2ac880-8552-4c9a-93f2-c61bab8dd15c", 00:08:35.910 "assigned_rate_limits": { 00:08:35.910 "rw_ios_per_sec": 0, 00:08:35.910 "rw_mbytes_per_sec": 0, 00:08:35.910 "r_mbytes_per_sec": 0, 00:08:35.910 "w_mbytes_per_sec": 0 00:08:35.911 }, 00:08:35.911 "claimed": false, 00:08:35.911 "zoned": false, 00:08:35.911 "supported_io_types": { 00:08:35.911 "read": true, 00:08:35.911 "write": true, 00:08:35.911 "unmap": true, 00:08:35.911 "flush": true, 00:08:35.911 "reset": true, 00:08:35.911 "nvme_admin": false, 00:08:35.911 "nvme_io": false, 00:08:35.911 "nvme_io_md": false, 00:08:35.911 "write_zeroes": true, 00:08:35.911 "zcopy": false, 00:08:35.911 "get_zone_info": false, 00:08:35.911 "zone_management": false, 00:08:35.911 "zone_append": false, 00:08:35.911 "compare": false, 00:08:35.911 "compare_and_write": false, 00:08:35.911 "abort": false, 00:08:35.911 "seek_hole": false, 00:08:35.911 "seek_data": false, 00:08:35.911 "copy": false, 00:08:35.911 "nvme_iov_md": false 00:08:35.911 }, 00:08:35.911 "memory_domains": [ 00:08:35.911 { 00:08:35.911 "dma_device_id": "system", 00:08:35.911 "dma_device_type": 1 00:08:35.911 }, 00:08:35.911 { 00:08:35.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.911 "dma_device_type": 2 00:08:35.911 }, 00:08:35.911 { 00:08:35.911 "dma_device_id": "system", 00:08:35.911 "dma_device_type": 1 00:08:35.911 }, 00:08:35.911 { 00:08:35.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.911 "dma_device_type": 2 00:08:35.911 } 00:08:35.911 ], 00:08:35.911 "driver_specific": { 00:08:35.911 "raid": { 00:08:35.911 "uuid": "5b2ac880-8552-4c9a-93f2-c61bab8dd15c", 00:08:35.911 
"strip_size_kb": 64, 00:08:35.911 "state": "online", 00:08:35.911 "raid_level": "concat", 00:08:35.911 "superblock": true, 00:08:35.911 "num_base_bdevs": 2, 00:08:35.911 "num_base_bdevs_discovered": 2, 00:08:35.911 "num_base_bdevs_operational": 2, 00:08:35.911 "base_bdevs_list": [ 00:08:35.911 { 00:08:35.911 "name": "BaseBdev1", 00:08:35.911 "uuid": "a572f7ab-4dc6-4cf0-9daf-8e1dded15082", 00:08:35.911 "is_configured": true, 00:08:35.911 "data_offset": 2048, 00:08:35.911 "data_size": 63488 00:08:35.911 }, 00:08:35.911 { 00:08:35.911 "name": "BaseBdev2", 00:08:35.911 "uuid": "a3f76f8b-de1c-4ff5-80cd-3cfb53c23cba", 00:08:35.911 "is_configured": true, 00:08:35.911 "data_offset": 2048, 00:08:35.911 "data_size": 63488 00:08:35.911 } 00:08:35.911 ] 00:08:35.911 } 00:08:35.911 } 00:08:35.911 }' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.911 BaseBdev2' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.911 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.175 [2024-11-05 03:19:49.550413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.175 [2024-11-05 03:19:49.550618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.175 [2024-11-05 03:19:49.550841] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.175 
03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.175 "name": "Existed_Raid", 00:08:36.175 "uuid": "5b2ac880-8552-4c9a-93f2-c61bab8dd15c", 00:08:36.175 "strip_size_kb": 64, 00:08:36.175 "state": "offline", 00:08:36.175 "raid_level": "concat", 00:08:36.175 "superblock": true, 00:08:36.175 "num_base_bdevs": 2, 00:08:36.175 "num_base_bdevs_discovered": 1, 00:08:36.175 "num_base_bdevs_operational": 1, 00:08:36.175 "base_bdevs_list": [ 00:08:36.175 { 00:08:36.175 "name": null, 00:08:36.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.175 "is_configured": false, 00:08:36.175 "data_offset": 0, 00:08:36.175 "data_size": 63488 00:08:36.175 }, 00:08:36.175 { 00:08:36.175 "name": "BaseBdev2", 00:08:36.175 "uuid": "a3f76f8b-de1c-4ff5-80cd-3cfb53c23cba", 00:08:36.175 "is_configured": true, 00:08:36.175 "data_offset": 2048, 00:08:36.175 "data_size": 63488 00:08:36.175 } 00:08:36.175 ] 00:08:36.175 }' 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.175 03:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.744 03:19:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.744 [2024-11-05 03:19:50.218414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.744 [2024-11-05 03:19:50.218658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.744 03:19:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61692 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61692 ']' 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61692 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:36.744 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61692 00:08:37.004 killing process with pid 61692 00:08:37.004 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.004 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.004 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61692' 00:08:37.004 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61692 00:08:37.004 [2024-11-05 03:19:50.391348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.004 03:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61692 00:08:37.004 [2024-11-05 03:19:50.405650] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.942 03:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.942 00:08:37.942 real 0m5.406s 00:08:37.942 user 0m8.274s 00:08:37.942 sys 0m0.756s 00:08:37.942 ************************************ 00:08:37.942 END TEST raid_state_function_test_sb 00:08:37.942 ************************************ 00:08:37.942 03:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.942 03:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.942 03:19:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:37.942 03:19:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:37.942 03:19:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.942 03:19:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.942 ************************************ 00:08:37.942 START TEST raid_superblock_test 00:08:37.942 ************************************ 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:37.942 
03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61944 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61944 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61944 ']' 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.942 03:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.942 [2024-11-05 03:19:51.509225] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:37.942 [2024-11-05 03:19:51.509454] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61944 ] 00:08:38.201 [2024-11-05 03:19:51.694939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.201 [2024-11-05 03:19:51.809759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.460 [2024-11-05 03:19:52.011256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.460 [2024-11-05 03:19:52.011602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:39.029 03:19:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.029 malloc1 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.029 [2024-11-05 03:19:52.519825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:39.029 [2024-11-05 03:19:52.519898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.029 [2024-11-05 03:19:52.519930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:39.029 [2024-11-05 03:19:52.519944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.029 [2024-11-05 03:19:52.522833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.029 [2024-11-05 03:19:52.522874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:39.029 pt1 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:39.029 03:19:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.029 malloc2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.029 [2024-11-05 03:19:52.577148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:39.029 [2024-11-05 03:19:52.577222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.029 [2024-11-05 03:19:52.577268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:39.029 
[2024-11-05 03:19:52.577297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.029 [2024-11-05 03:19:52.580099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.029 [2024-11-05 03:19:52.580141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:39.029 pt2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.029 [2024-11-05 03:19:52.585230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:39.029 [2024-11-05 03:19:52.587872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:39.029 [2024-11-05 03:19:52.588250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:39.029 [2024-11-05 03:19:52.588406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:39.029 [2024-11-05 03:19:52.588806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:39.029 [2024-11-05 03:19:52.589173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:39.029 [2024-11-05 03:19:52.589330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:39.029 [2024-11-05 03:19:52.589580] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.029 "name": "raid_bdev1", 00:08:39.029 "uuid": 
"8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:39.029 "strip_size_kb": 64, 00:08:39.029 "state": "online", 00:08:39.029 "raid_level": "concat", 00:08:39.029 "superblock": true, 00:08:39.029 "num_base_bdevs": 2, 00:08:39.029 "num_base_bdevs_discovered": 2, 00:08:39.029 "num_base_bdevs_operational": 2, 00:08:39.029 "base_bdevs_list": [ 00:08:39.029 { 00:08:39.029 "name": "pt1", 00:08:39.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.029 "is_configured": true, 00:08:39.029 "data_offset": 2048, 00:08:39.029 "data_size": 63488 00:08:39.029 }, 00:08:39.029 { 00:08:39.029 "name": "pt2", 00:08:39.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.029 "is_configured": true, 00:08:39.029 "data_offset": 2048, 00:08:39.029 "data_size": 63488 00:08:39.029 } 00:08:39.029 ] 00:08:39.029 }' 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.029 03:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.598 
03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.598 [2024-11-05 03:19:53.098530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.598 "name": "raid_bdev1", 00:08:39.598 "aliases": [ 00:08:39.598 "8d47fb9c-b202-4639-b6d8-67f7af03efb5" 00:08:39.598 ], 00:08:39.598 "product_name": "Raid Volume", 00:08:39.598 "block_size": 512, 00:08:39.598 "num_blocks": 126976, 00:08:39.598 "uuid": "8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:39.598 "assigned_rate_limits": { 00:08:39.598 "rw_ios_per_sec": 0, 00:08:39.598 "rw_mbytes_per_sec": 0, 00:08:39.598 "r_mbytes_per_sec": 0, 00:08:39.598 "w_mbytes_per_sec": 0 00:08:39.598 }, 00:08:39.598 "claimed": false, 00:08:39.598 "zoned": false, 00:08:39.598 "supported_io_types": { 00:08:39.598 "read": true, 00:08:39.598 "write": true, 00:08:39.598 "unmap": true, 00:08:39.598 "flush": true, 00:08:39.598 "reset": true, 00:08:39.598 "nvme_admin": false, 00:08:39.598 "nvme_io": false, 00:08:39.598 "nvme_io_md": false, 00:08:39.598 "write_zeroes": true, 00:08:39.598 "zcopy": false, 00:08:39.598 "get_zone_info": false, 00:08:39.598 "zone_management": false, 00:08:39.598 "zone_append": false, 00:08:39.598 "compare": false, 00:08:39.598 "compare_and_write": false, 00:08:39.598 "abort": false, 00:08:39.598 "seek_hole": false, 00:08:39.598 "seek_data": false, 00:08:39.598 "copy": false, 00:08:39.598 "nvme_iov_md": false 00:08:39.598 }, 00:08:39.598 "memory_domains": [ 00:08:39.598 { 00:08:39.598 "dma_device_id": "system", 00:08:39.598 "dma_device_type": 1 00:08:39.598 }, 00:08:39.598 { 00:08:39.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.598 "dma_device_type": 2 00:08:39.598 }, 00:08:39.598 { 00:08:39.598 "dma_device_id": "system", 00:08:39.598 
"dma_device_type": 1 00:08:39.598 }, 00:08:39.598 { 00:08:39.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.598 "dma_device_type": 2 00:08:39.598 } 00:08:39.598 ], 00:08:39.598 "driver_specific": { 00:08:39.598 "raid": { 00:08:39.598 "uuid": "8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:39.598 "strip_size_kb": 64, 00:08:39.598 "state": "online", 00:08:39.598 "raid_level": "concat", 00:08:39.598 "superblock": true, 00:08:39.598 "num_base_bdevs": 2, 00:08:39.598 "num_base_bdevs_discovered": 2, 00:08:39.598 "num_base_bdevs_operational": 2, 00:08:39.598 "base_bdevs_list": [ 00:08:39.598 { 00:08:39.598 "name": "pt1", 00:08:39.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.598 "is_configured": true, 00:08:39.598 "data_offset": 2048, 00:08:39.598 "data_size": 63488 00:08:39.598 }, 00:08:39.598 { 00:08:39.598 "name": "pt2", 00:08:39.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.598 "is_configured": true, 00:08:39.598 "data_offset": 2048, 00:08:39.598 "data_size": 63488 00:08:39.598 } 00:08:39.598 ] 00:08:39.598 } 00:08:39.598 } 00:08:39.598 }' 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.598 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:39.598 pt2' 00:08:39.599 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.858 03:19:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:39.858 [2024-11-05 03:19:53.366517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8d47fb9c-b202-4639-b6d8-67f7af03efb5 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8d47fb9c-b202-4639-b6d8-67f7af03efb5 ']' 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.858 [2024-11-05 03:19:53.418159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.858 [2024-11-05 03:19:53.418374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.858 [2024-11-05 03:19:53.418499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.858 [2024-11-05 03:19:53.418565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.858 [2024-11-05 03:19:53.418584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.858 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.859 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.118 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.118 [2024-11-05 03:19:53.558268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:40.118 [2024-11-05 03:19:53.560964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:40.118 [2024-11-05 03:19:53.561044] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:40.118 [2024-11-05 03:19:53.561141] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:40.118 [2024-11-05 03:19:53.561166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.118 [2024-11-05 03:19:53.561180] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:40.118 request: 00:08:40.118 { 00:08:40.118 "name": "raid_bdev1", 00:08:40.118 "raid_level": "concat", 00:08:40.118 "base_bdevs": [ 00:08:40.118 "malloc1", 00:08:40.118 "malloc2" 00:08:40.118 ], 00:08:40.118 "strip_size_kb": 64, 00:08:40.119 "superblock": false, 00:08:40.119 "method": "bdev_raid_create", 00:08:40.119 "req_id": 1 00:08:40.119 } 00:08:40.119 Got JSON-RPC error response 00:08:40.119 response: 00:08:40.119 { 00:08:40.119 "code": -17, 00:08:40.119 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:40.119 } 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.119 [2024-11-05 03:19:53.622261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.119 [2024-11-05 03:19:53.622520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.119 [2024-11-05 03:19:53.622691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:40.119 [2024-11-05 03:19:53.622872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.119 [2024-11-05 03:19:53.625866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.119 [2024-11-05 03:19:53.626054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.119 [2024-11-05 03:19:53.626264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:40.119 [2024-11-05 03:19:53.626475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:40.119 pt1 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.119 "name": "raid_bdev1", 00:08:40.119 "uuid": "8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:40.119 "strip_size_kb": 64, 00:08:40.119 "state": "configuring", 00:08:40.119 "raid_level": "concat", 00:08:40.119 "superblock": true, 00:08:40.119 "num_base_bdevs": 2, 00:08:40.119 "num_base_bdevs_discovered": 1, 00:08:40.119 "num_base_bdevs_operational": 2, 00:08:40.119 "base_bdevs_list": [ 00:08:40.119 { 00:08:40.119 "name": "pt1", 00:08:40.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.119 "is_configured": true, 00:08:40.119 "data_offset": 2048, 00:08:40.119 "data_size": 63488 00:08:40.119 }, 00:08:40.119 { 00:08:40.119 "name": null, 00:08:40.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.119 "is_configured": false, 00:08:40.119 "data_offset": 2048, 00:08:40.119 "data_size": 63488 00:08:40.119 } 00:08:40.119 ] 00:08:40.119 }' 00:08:40.119 03:19:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.119 03:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.687 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:40.687 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.688 [2024-11-05 03:19:54.134660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.688 [2024-11-05 03:19:54.134970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.688 [2024-11-05 03:19:54.135045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:40.688 [2024-11-05 03:19:54.135296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.688 [2024-11-05 03:19:54.135914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.688 [2024-11-05 03:19:54.135968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.688 [2024-11-05 03:19:54.136061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:40.688 [2024-11-05 03:19:54.136095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.688 [2024-11-05 03:19:54.136224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.688 [2024-11-05 03:19:54.136244] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:40.688 [2024-11-05 03:19:54.136566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:40.688 [2024-11-05 03:19:54.136761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.688 [2024-11-05 03:19:54.136784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:40.688 [2024-11-05 03:19:54.136945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.688 pt2 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.688 "name": "raid_bdev1", 00:08:40.688 "uuid": "8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:40.688 "strip_size_kb": 64, 00:08:40.688 "state": "online", 00:08:40.688 "raid_level": "concat", 00:08:40.688 "superblock": true, 00:08:40.688 "num_base_bdevs": 2, 00:08:40.688 "num_base_bdevs_discovered": 2, 00:08:40.688 "num_base_bdevs_operational": 2, 00:08:40.688 "base_bdevs_list": [ 00:08:40.688 { 00:08:40.688 "name": "pt1", 00:08:40.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.688 "is_configured": true, 00:08:40.688 "data_offset": 2048, 00:08:40.688 "data_size": 63488 00:08:40.688 }, 00:08:40.688 { 00:08:40.688 "name": "pt2", 00:08:40.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.688 "is_configured": true, 00:08:40.688 "data_offset": 2048, 00:08:40.688 "data_size": 63488 00:08:40.688 } 00:08:40.688 ] 00:08:40.688 }' 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.688 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:41.256 
03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.256 [2024-11-05 03:19:54.659065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.256 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.256 "name": "raid_bdev1", 00:08:41.256 "aliases": [ 00:08:41.256 "8d47fb9c-b202-4639-b6d8-67f7af03efb5" 00:08:41.256 ], 00:08:41.256 "product_name": "Raid Volume", 00:08:41.256 "block_size": 512, 00:08:41.256 "num_blocks": 126976, 00:08:41.256 "uuid": "8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:41.256 "assigned_rate_limits": { 00:08:41.256 "rw_ios_per_sec": 0, 00:08:41.256 "rw_mbytes_per_sec": 0, 00:08:41.256 "r_mbytes_per_sec": 0, 00:08:41.256 "w_mbytes_per_sec": 0 00:08:41.256 }, 00:08:41.256 "claimed": false, 00:08:41.256 "zoned": false, 00:08:41.256 "supported_io_types": { 00:08:41.256 "read": true, 00:08:41.256 "write": true, 00:08:41.256 "unmap": true, 00:08:41.256 "flush": true, 00:08:41.256 "reset": true, 00:08:41.256 "nvme_admin": false, 00:08:41.256 "nvme_io": false, 00:08:41.256 "nvme_io_md": false, 00:08:41.256 
"write_zeroes": true, 00:08:41.256 "zcopy": false, 00:08:41.256 "get_zone_info": false, 00:08:41.256 "zone_management": false, 00:08:41.256 "zone_append": false, 00:08:41.256 "compare": false, 00:08:41.256 "compare_and_write": false, 00:08:41.256 "abort": false, 00:08:41.256 "seek_hole": false, 00:08:41.256 "seek_data": false, 00:08:41.256 "copy": false, 00:08:41.256 "nvme_iov_md": false 00:08:41.256 }, 00:08:41.256 "memory_domains": [ 00:08:41.256 { 00:08:41.256 "dma_device_id": "system", 00:08:41.256 "dma_device_type": 1 00:08:41.256 }, 00:08:41.256 { 00:08:41.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.256 "dma_device_type": 2 00:08:41.256 }, 00:08:41.256 { 00:08:41.256 "dma_device_id": "system", 00:08:41.256 "dma_device_type": 1 00:08:41.256 }, 00:08:41.256 { 00:08:41.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.256 "dma_device_type": 2 00:08:41.256 } 00:08:41.256 ], 00:08:41.256 "driver_specific": { 00:08:41.256 "raid": { 00:08:41.256 "uuid": "8d47fb9c-b202-4639-b6d8-67f7af03efb5", 00:08:41.256 "strip_size_kb": 64, 00:08:41.256 "state": "online", 00:08:41.256 "raid_level": "concat", 00:08:41.256 "superblock": true, 00:08:41.256 "num_base_bdevs": 2, 00:08:41.256 "num_base_bdevs_discovered": 2, 00:08:41.256 "num_base_bdevs_operational": 2, 00:08:41.256 "base_bdevs_list": [ 00:08:41.256 { 00:08:41.256 "name": "pt1", 00:08:41.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.256 "is_configured": true, 00:08:41.256 "data_offset": 2048, 00:08:41.256 "data_size": 63488 00:08:41.256 }, 00:08:41.256 { 00:08:41.256 "name": "pt2", 00:08:41.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.257 "is_configured": true, 00:08:41.257 "data_offset": 2048, 00:08:41.257 "data_size": 63488 00:08:41.257 } 00:08:41.257 ] 00:08:41.257 } 00:08:41.257 } 00:08:41.257 }' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:41.257 pt2' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.257 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.516 03:19:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.516 [2024-11-05 03:19:54.931164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8d47fb9c-b202-4639-b6d8-67f7af03efb5 '!=' 8d47fb9c-b202-4639-b6d8-67f7af03efb5 ']' 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61944 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61944 ']' 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61944 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:41.516 03:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61944 00:08:41.516 killing process with pid 61944 
00:08:41.516 03:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:41.516 03:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:41.516 03:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61944' 00:08:41.516 03:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61944 00:08:41.516 03:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61944 00:08:41.516 [2024-11-05 03:19:55.014135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.516 [2024-11-05 03:19:55.014289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.516 [2024-11-05 03:19:55.014405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.516 [2024-11-05 03:19:55.014427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:41.775 [2024-11-05 03:19:55.185073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.725 ************************************ 00:08:42.725 END TEST raid_superblock_test 00:08:42.725 ************************************ 00:08:42.725 03:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:42.725 00:08:42.725 real 0m4.756s 00:08:42.725 user 0m7.015s 00:08:42.725 sys 0m0.730s 00:08:42.725 03:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.725 03:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.725 03:19:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:42.725 03:19:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:42.725 03:19:56 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.725 03:19:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.725 ************************************ 00:08:42.725 START TEST raid_read_error_test 00:08:42.725 ************************************ 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:42.725 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:42.726 03:19:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iwEAWTxy1T 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62161 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62161 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62161 ']' 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.726 03:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.726 [2024-11-05 03:19:56.339644] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:42.726 [2024-11-05 03:19:56.339861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62161 ] 00:08:42.994 [2024-11-05 03:19:56.527882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.253 [2024-11-05 03:19:56.651221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.253 [2024-11-05 03:19:56.849598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.253 [2024-11-05 03:19:56.849653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.821 BaseBdev1_malloc 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.821 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.822 true 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.822 [2024-11-05 03:19:57.350784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:43.822 [2024-11-05 03:19:57.350882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.822 [2024-11-05 03:19:57.350909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:43.822 [2024-11-05 03:19:57.350926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.822 [2024-11-05 03:19:57.353865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.822 [2024-11-05 03:19:57.353952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:43.822 BaseBdev1 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.822 BaseBdev2_malloc 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.822 true 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.822 [2024-11-05 03:19:57.410461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:43.822 [2024-11-05 03:19:57.410555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.822 [2024-11-05 03:19:57.410579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:43.822 [2024-11-05 03:19:57.410595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.822 [2024-11-05 03:19:57.413414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.822 [2024-11-05 03:19:57.413472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:43.822 BaseBdev2 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:43.822 
03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.822 [2024-11-05 03:19:57.418542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.822 [2024-11-05 03:19:57.420980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.822 [2024-11-05 03:19:57.421219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.822 [2024-11-05 03:19:57.421258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:43.822 [2024-11-05 03:19:57.421595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.822 [2024-11-05 03:19:57.421856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.822 [2024-11-05 03:19:57.421874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.822 [2024-11-05 03:19:57.422089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.822 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.081 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.081 "name": "raid_bdev1", 00:08:44.081 "uuid": "bb5a5c49-a492-4f65-944e-125e3a3feefe", 00:08:44.081 "strip_size_kb": 64, 00:08:44.081 "state": "online", 00:08:44.081 "raid_level": "concat", 00:08:44.081 "superblock": true, 00:08:44.081 "num_base_bdevs": 2, 00:08:44.081 "num_base_bdevs_discovered": 2, 00:08:44.081 "num_base_bdevs_operational": 2, 00:08:44.081 "base_bdevs_list": [ 00:08:44.081 { 00:08:44.081 "name": "BaseBdev1", 00:08:44.081 "uuid": "34520e67-6ecc-5494-b4d8-c725291a899c", 00:08:44.081 "is_configured": true, 00:08:44.081 "data_offset": 2048, 00:08:44.081 "data_size": 63488 00:08:44.081 }, 00:08:44.081 { 00:08:44.081 "name": "BaseBdev2", 00:08:44.081 "uuid": "ac3cdd23-8df1-5a69-a6d3-6dd17acfa991", 00:08:44.081 "is_configured": true, 00:08:44.081 "data_offset": 2048, 00:08:44.081 "data_size": 63488 00:08:44.081 } 00:08:44.081 ] 00:08:44.081 }' 00:08:44.081 03:19:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.081 03:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.341 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:44.341 03:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:44.600 [2024-11-05 03:19:58.060076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.538 03:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.538 03:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.538 "name": "raid_bdev1", 00:08:45.538 "uuid": "bb5a5c49-a492-4f65-944e-125e3a3feefe", 00:08:45.538 "strip_size_kb": 64, 00:08:45.538 "state": "online", 00:08:45.538 "raid_level": "concat", 00:08:45.538 "superblock": true, 00:08:45.538 "num_base_bdevs": 2, 00:08:45.538 "num_base_bdevs_discovered": 2, 00:08:45.538 "num_base_bdevs_operational": 2, 00:08:45.538 "base_bdevs_list": [ 00:08:45.538 { 00:08:45.538 "name": "BaseBdev1", 00:08:45.538 "uuid": "34520e67-6ecc-5494-b4d8-c725291a899c", 00:08:45.538 "is_configured": true, 00:08:45.538 "data_offset": 2048, 00:08:45.538 "data_size": 63488 00:08:45.538 }, 00:08:45.538 { 00:08:45.538 "name": "BaseBdev2", 00:08:45.538 "uuid": "ac3cdd23-8df1-5a69-a6d3-6dd17acfa991", 00:08:45.538 "is_configured": true, 00:08:45.538 "data_offset": 2048, 00:08:45.538 "data_size": 63488 00:08:45.538 } 00:08:45.538 ] 00:08:45.538 }' 00:08:45.538 03:19:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.538 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.106 03:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.107 [2024-11-05 03:19:59.487114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.107 [2024-11-05 03:19:59.487173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.107 [2024-11-05 03:19:59.490826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.107 [2024-11-05 03:19:59.490884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.107 [2024-11-05 03:19:59.490925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.107 [2024-11-05 03:19:59.490946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.107 { 00:08:46.107 "results": [ 00:08:46.107 { 00:08:46.107 "job": "raid_bdev1", 00:08:46.107 "core_mask": "0x1", 00:08:46.107 "workload": "randrw", 00:08:46.107 "percentage": 50, 00:08:46.107 "status": "finished", 00:08:46.107 "queue_depth": 1, 00:08:46.107 "io_size": 131072, 00:08:46.107 "runtime": 1.424412, 00:08:46.107 "iops": 11191.986588150057, 00:08:46.107 "mibps": 1398.9983235187572, 00:08:46.107 "io_failed": 1, 00:08:46.107 "io_timeout": 0, 00:08:46.107 "avg_latency_us": 124.48556710554075, 00:08:46.107 "min_latency_us": 36.53818181818182, 00:08:46.107 "max_latency_us": 1951.1854545454546 00:08:46.107 } 00:08:46.107 ], 00:08:46.107 "core_count": 1 00:08:46.107 } 00:08:46.107 03:19:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62161 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62161 ']' 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62161 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62161 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:46.107 killing process with pid 62161 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62161' 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62161 00:08:46.107 [2024-11-05 03:19:59.529364] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.107 03:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62161 00:08:46.107 [2024-11-05 03:19:59.658099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iwEAWTxy1T 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:47.486 03:20:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:47.486 00:08:47.486 real 0m4.575s 00:08:47.486 user 0m5.723s 00:08:47.486 sys 0m0.568s 00:08:47.486 ************************************ 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:47.486 03:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 END TEST raid_read_error_test 00:08:47.486 ************************************ 00:08:47.486 03:20:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:47.486 03:20:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:47.486 03:20:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.486 03:20:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 ************************************ 00:08:47.486 START TEST raid_write_error_test 00:08:47.486 ************************************ 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LEIPdQ5i9e 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62301 
00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62301 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62301 ']' 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:47.486 03:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 [2024-11-05 03:20:00.967684] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:08:47.486 [2024-11-05 03:20:00.967888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62301 ] 00:08:47.745 [2024-11-05 03:20:01.154935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.745 [2024-11-05 03:20:01.288829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.004 [2024-11-05 03:20:01.496142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.004 [2024-11-05 03:20:01.496227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.573 03:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.573 03:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:48.573 03:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.573 03:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.573 03:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.573 03:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.573 BaseBdev1_malloc 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.573 true 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.573 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.573 [2024-11-05 03:20:02.025275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.573 [2024-11-05 03:20:02.025378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.573 [2024-11-05 03:20:02.025421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.573 [2024-11-05 03:20:02.025440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.573 [2024-11-05 03:20:02.028466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.573 [2024-11-05 03:20:02.028514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.573 BaseBdev1 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.574 BaseBdev2_malloc 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.574 03:20:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.574 true 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.574 [2024-11-05 03:20:02.082031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.574 [2024-11-05 03:20:02.082096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.574 [2024-11-05 03:20:02.082121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.574 [2024-11-05 03:20:02.082138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.574 [2024-11-05 03:20:02.085006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.574 [2024-11-05 03:20:02.085063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.574 BaseBdev2 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.574 [2024-11-05 03:20:02.090109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:48.574 [2024-11-05 03:20:02.092767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.574 [2024-11-05 03:20:02.093053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.574 [2024-11-05 03:20:02.093091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:48.574 [2024-11-05 03:20:02.093405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.574 [2024-11-05 03:20:02.093645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.574 [2024-11-05 03:20:02.093675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:48.574 [2024-11-05 03:20:02.093860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.574 03:20:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.574 "name": "raid_bdev1", 00:08:48.574 "uuid": "dae24e43-5718-46f8-b85c-8d1943c06907", 00:08:48.574 "strip_size_kb": 64, 00:08:48.574 "state": "online", 00:08:48.574 "raid_level": "concat", 00:08:48.574 "superblock": true, 00:08:48.574 "num_base_bdevs": 2, 00:08:48.574 "num_base_bdevs_discovered": 2, 00:08:48.574 "num_base_bdevs_operational": 2, 00:08:48.574 "base_bdevs_list": [ 00:08:48.574 { 00:08:48.574 "name": "BaseBdev1", 00:08:48.574 "uuid": "cc05025f-bf59-5283-afa4-763e2189b232", 00:08:48.574 "is_configured": true, 00:08:48.574 "data_offset": 2048, 00:08:48.574 "data_size": 63488 00:08:48.574 }, 00:08:48.574 { 00:08:48.574 "name": "BaseBdev2", 00:08:48.574 "uuid": "677814d3-8afd-59f3-bd32-b76802389bf1", 00:08:48.574 "is_configured": true, 00:08:48.574 "data_offset": 2048, 00:08:48.574 "data_size": 63488 00:08:48.574 } 00:08:48.574 ] 00:08:48.574 }' 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.574 03:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.143 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:49.143 03:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.143 [2024-11-05 03:20:02.723758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.081 "name": "raid_bdev1", 00:08:50.081 "uuid": "dae24e43-5718-46f8-b85c-8d1943c06907", 00:08:50.081 "strip_size_kb": 64, 00:08:50.081 "state": "online", 00:08:50.081 "raid_level": "concat", 00:08:50.081 "superblock": true, 00:08:50.081 "num_base_bdevs": 2, 00:08:50.081 "num_base_bdevs_discovered": 2, 00:08:50.081 "num_base_bdevs_operational": 2, 00:08:50.081 "base_bdevs_list": [ 00:08:50.081 { 00:08:50.081 "name": "BaseBdev1", 00:08:50.081 "uuid": "cc05025f-bf59-5283-afa4-763e2189b232", 00:08:50.081 "is_configured": true, 00:08:50.081 "data_offset": 2048, 00:08:50.081 "data_size": 63488 00:08:50.081 }, 00:08:50.081 { 00:08:50.081 "name": "BaseBdev2", 00:08:50.081 "uuid": "677814d3-8afd-59f3-bd32-b76802389bf1", 00:08:50.081 "is_configured": true, 00:08:50.081 "data_offset": 2048, 00:08:50.081 "data_size": 63488 00:08:50.081 } 00:08:50.081 ] 00:08:50.081 }' 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.081 03:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.649 [2024-11-05 03:20:04.162877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.649 [2024-11-05 03:20:04.162922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.649 [2024-11-05 03:20:04.166496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.649 [2024-11-05 03:20:04.166557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.649 [2024-11-05 03:20:04.166601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.649 [2024-11-05 03:20:04.166622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:50.649 { 00:08:50.649 "results": [ 00:08:50.649 { 00:08:50.649 "job": "raid_bdev1", 00:08:50.649 "core_mask": "0x1", 00:08:50.649 "workload": "randrw", 00:08:50.649 "percentage": 50, 00:08:50.649 "status": "finished", 00:08:50.649 "queue_depth": 1, 00:08:50.649 "io_size": 131072, 00:08:50.649 "runtime": 1.436679, 00:08:50.649 "iops": 10714.293171961168, 00:08:50.649 "mibps": 1339.286646495146, 00:08:50.649 "io_failed": 1, 00:08:50.649 "io_timeout": 0, 00:08:50.649 "avg_latency_us": 130.26152432470738, 00:08:50.649 "min_latency_us": 37.236363636363635, 00:08:50.649 "max_latency_us": 1951.1854545454546 00:08:50.649 } 00:08:50.649 ], 00:08:50.649 "core_count": 1 00:08:50.649 } 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62301 00:08:50.649 03:20:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62301 ']' 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62301 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62301 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:50.649 killing process with pid 62301 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62301' 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62301 00:08:50.649 [2024-11-05 03:20:04.204087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.649 03:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62301 00:08:50.907 [2024-11-05 03:20:04.331919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LEIPdQ5i9e 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.844 03:20:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:51.844 00:08:51.844 real 0m4.584s 00:08:51.844 user 0m5.771s 00:08:51.844 sys 0m0.564s 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.844 03:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.844 ************************************ 00:08:51.844 END TEST raid_write_error_test 00:08:51.844 ************************************ 00:08:51.844 03:20:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:51.844 03:20:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:51.844 03:20:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:51.844 03:20:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.844 03:20:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.103 ************************************ 00:08:52.103 START TEST raid_state_function_test 00:08:52.103 ************************************ 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62450 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62450' 00:08:52.103 
Process raid pid: 62450 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62450 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62450 ']' 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.103 03:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.103 [2024-11-05 03:20:05.596957] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:08:52.103 [2024-11-05 03:20:05.597156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.362 [2024-11-05 03:20:05.776652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.362 [2024-11-05 03:20:05.885404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.621 [2024-11-05 03:20:06.073687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.621 [2024-11-05 03:20:06.073755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.880 [2024-11-05 03:20:06.510849] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.880 [2024-11-05 03:20:06.510917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.880 [2024-11-05 03:20:06.510931] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.880 [2024-11-05 03:20:06.510945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.880 03:20:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.880 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.141 "name": "Existed_Raid", 00:08:53.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.141 "strip_size_kb": 0, 00:08:53.141 "state": "configuring", 00:08:53.141 
"raid_level": "raid1", 00:08:53.141 "superblock": false, 00:08:53.141 "num_base_bdevs": 2, 00:08:53.141 "num_base_bdevs_discovered": 0, 00:08:53.141 "num_base_bdevs_operational": 2, 00:08:53.141 "base_bdevs_list": [ 00:08:53.141 { 00:08:53.141 "name": "BaseBdev1", 00:08:53.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.141 "is_configured": false, 00:08:53.141 "data_offset": 0, 00:08:53.141 "data_size": 0 00:08:53.141 }, 00:08:53.141 { 00:08:53.141 "name": "BaseBdev2", 00:08:53.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.141 "is_configured": false, 00:08:53.141 "data_offset": 0, 00:08:53.141 "data_size": 0 00:08:53.141 } 00:08:53.141 ] 00:08:53.141 }' 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.141 03:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.400 [2024-11-05 03:20:07.030965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.400 [2024-11-05 03:20:07.031022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.400 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:53.658 [2024-11-05 03:20:07.038956] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.658 [2024-11-05 03:20:07.039031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.658 [2024-11-05 03:20:07.039058] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.658 [2024-11-05 03:20:07.039074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.658 [2024-11-05 03:20:07.082445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.658 BaseBdev1 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.658 [ 00:08:53.658 { 00:08:53.658 "name": "BaseBdev1", 00:08:53.658 "aliases": [ 00:08:53.658 "ac1d194f-9768-4184-9f31-b721f0d30e86" 00:08:53.658 ], 00:08:53.658 "product_name": "Malloc disk", 00:08:53.658 "block_size": 512, 00:08:53.658 "num_blocks": 65536, 00:08:53.658 "uuid": "ac1d194f-9768-4184-9f31-b721f0d30e86", 00:08:53.658 "assigned_rate_limits": { 00:08:53.658 "rw_ios_per_sec": 0, 00:08:53.658 "rw_mbytes_per_sec": 0, 00:08:53.658 "r_mbytes_per_sec": 0, 00:08:53.658 "w_mbytes_per_sec": 0 00:08:53.658 }, 00:08:53.658 "claimed": true, 00:08:53.658 "claim_type": "exclusive_write", 00:08:53.658 "zoned": false, 00:08:53.658 "supported_io_types": { 00:08:53.658 "read": true, 00:08:53.658 "write": true, 00:08:53.658 "unmap": true, 00:08:53.658 "flush": true, 00:08:53.658 "reset": true, 00:08:53.658 "nvme_admin": false, 00:08:53.658 "nvme_io": false, 00:08:53.658 "nvme_io_md": false, 00:08:53.658 "write_zeroes": true, 00:08:53.658 "zcopy": true, 00:08:53.658 "get_zone_info": false, 00:08:53.658 "zone_management": false, 00:08:53.658 "zone_append": false, 00:08:53.658 "compare": false, 00:08:53.658 "compare_and_write": false, 00:08:53.658 "abort": true, 00:08:53.658 "seek_hole": false, 00:08:53.658 "seek_data": false, 00:08:53.658 "copy": true, 00:08:53.658 "nvme_iov_md": 
false 00:08:53.658 }, 00:08:53.658 "memory_domains": [ 00:08:53.658 { 00:08:53.658 "dma_device_id": "system", 00:08:53.658 "dma_device_type": 1 00:08:53.658 }, 00:08:53.658 { 00:08:53.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.658 "dma_device_type": 2 00:08:53.658 } 00:08:53.658 ], 00:08:53.658 "driver_specific": {} 00:08:53.658 } 00:08:53.658 ] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.658 
03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.658 "name": "Existed_Raid", 00:08:53.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.658 "strip_size_kb": 0, 00:08:53.658 "state": "configuring", 00:08:53.658 "raid_level": "raid1", 00:08:53.658 "superblock": false, 00:08:53.658 "num_base_bdevs": 2, 00:08:53.658 "num_base_bdevs_discovered": 1, 00:08:53.658 "num_base_bdevs_operational": 2, 00:08:53.658 "base_bdevs_list": [ 00:08:53.658 { 00:08:53.658 "name": "BaseBdev1", 00:08:53.658 "uuid": "ac1d194f-9768-4184-9f31-b721f0d30e86", 00:08:53.658 "is_configured": true, 00:08:53.658 "data_offset": 0, 00:08:53.658 "data_size": 65536 00:08:53.658 }, 00:08:53.658 { 00:08:53.658 "name": "BaseBdev2", 00:08:53.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.658 "is_configured": false, 00:08:53.658 "data_offset": 0, 00:08:53.658 "data_size": 0 00:08:53.658 } 00:08:53.658 ] 00:08:53.658 }' 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.658 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.226 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.226 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.226 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.226 [2024-11-05 03:20:07.614703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.227 [2024-11-05 03:20:07.614764] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.227 [2024-11-05 03:20:07.622747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.227 [2024-11-05 03:20:07.625216] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.227 [2024-11-05 03:20:07.625277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.227 "name": "Existed_Raid", 00:08:54.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.227 "strip_size_kb": 0, 00:08:54.227 "state": "configuring", 00:08:54.227 "raid_level": "raid1", 00:08:54.227 "superblock": false, 00:08:54.227 "num_base_bdevs": 2, 00:08:54.227 "num_base_bdevs_discovered": 1, 00:08:54.227 "num_base_bdevs_operational": 2, 00:08:54.227 "base_bdevs_list": [ 00:08:54.227 { 00:08:54.227 "name": "BaseBdev1", 00:08:54.227 "uuid": "ac1d194f-9768-4184-9f31-b721f0d30e86", 00:08:54.227 "is_configured": true, 00:08:54.227 "data_offset": 0, 00:08:54.227 "data_size": 65536 00:08:54.227 }, 00:08:54.227 { 00:08:54.227 "name": "BaseBdev2", 00:08:54.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.227 "is_configured": false, 00:08:54.227 "data_offset": 0, 00:08:54.227 "data_size": 0 00:08:54.227 } 00:08:54.227 ] 
00:08:54.227 }' 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.227 03:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.795 [2024-11-05 03:20:08.184311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.795 [2024-11-05 03:20:08.184414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.795 [2024-11-05 03:20:08.184426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:54.795 [2024-11-05 03:20:08.184796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:54.795 [2024-11-05 03:20:08.185029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.795 [2024-11-05 03:20:08.185064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:54.795 [2024-11-05 03:20:08.185416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.795 BaseBdev2 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.795 [ 00:08:54.795 { 00:08:54.795 "name": "BaseBdev2", 00:08:54.795 "aliases": [ 00:08:54.795 "3b69ef86-ac99-4c71-882a-cb22b37ba359" 00:08:54.795 ], 00:08:54.795 "product_name": "Malloc disk", 00:08:54.795 "block_size": 512, 00:08:54.795 "num_blocks": 65536, 00:08:54.795 "uuid": "3b69ef86-ac99-4c71-882a-cb22b37ba359", 00:08:54.795 "assigned_rate_limits": { 00:08:54.795 "rw_ios_per_sec": 0, 00:08:54.795 "rw_mbytes_per_sec": 0, 00:08:54.795 "r_mbytes_per_sec": 0, 00:08:54.795 "w_mbytes_per_sec": 0 00:08:54.795 }, 00:08:54.795 "claimed": true, 00:08:54.795 "claim_type": "exclusive_write", 00:08:54.795 "zoned": false, 00:08:54.795 "supported_io_types": { 00:08:54.795 "read": true, 00:08:54.795 "write": true, 00:08:54.795 "unmap": true, 00:08:54.795 "flush": true, 00:08:54.795 "reset": true, 00:08:54.795 "nvme_admin": false, 00:08:54.795 "nvme_io": false, 00:08:54.795 "nvme_io_md": false, 00:08:54.795 "write_zeroes": 
true, 00:08:54.795 "zcopy": true, 00:08:54.795 "get_zone_info": false, 00:08:54.795 "zone_management": false, 00:08:54.795 "zone_append": false, 00:08:54.795 "compare": false, 00:08:54.795 "compare_and_write": false, 00:08:54.795 "abort": true, 00:08:54.795 "seek_hole": false, 00:08:54.795 "seek_data": false, 00:08:54.795 "copy": true, 00:08:54.795 "nvme_iov_md": false 00:08:54.795 }, 00:08:54.795 "memory_domains": [ 00:08:54.795 { 00:08:54.795 "dma_device_id": "system", 00:08:54.795 "dma_device_type": 1 00:08:54.795 }, 00:08:54.795 { 00:08:54.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.795 "dma_device_type": 2 00:08:54.795 } 00:08:54.795 ], 00:08:54.795 "driver_specific": {} 00:08:54.795 } 00:08:54.795 ] 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.795 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.796 03:20:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.796 "name": "Existed_Raid", 00:08:54.796 "uuid": "f5004f7f-b039-4290-a606-24db86bdd1b1", 00:08:54.796 "strip_size_kb": 0, 00:08:54.796 "state": "online", 00:08:54.796 "raid_level": "raid1", 00:08:54.796 "superblock": false, 00:08:54.796 "num_base_bdevs": 2, 00:08:54.796 "num_base_bdevs_discovered": 2, 00:08:54.796 "num_base_bdevs_operational": 2, 00:08:54.796 "base_bdevs_list": [ 00:08:54.796 { 00:08:54.796 "name": "BaseBdev1", 00:08:54.796 "uuid": "ac1d194f-9768-4184-9f31-b721f0d30e86", 00:08:54.796 "is_configured": true, 00:08:54.796 "data_offset": 0, 00:08:54.796 "data_size": 65536 00:08:54.796 }, 00:08:54.796 { 00:08:54.796 "name": "BaseBdev2", 00:08:54.796 "uuid": "3b69ef86-ac99-4c71-882a-cb22b37ba359", 00:08:54.796 "is_configured": true, 00:08:54.796 "data_offset": 0, 00:08:54.796 "data_size": 65536 00:08:54.796 } 00:08:54.796 ] 00:08:54.796 }' 00:08:54.796 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.796 03:20:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 [2024-11-05 03:20:08.744943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.364 "name": "Existed_Raid", 00:08:55.364 "aliases": [ 00:08:55.364 "f5004f7f-b039-4290-a606-24db86bdd1b1" 00:08:55.364 ], 00:08:55.364 "product_name": "Raid Volume", 00:08:55.364 "block_size": 512, 00:08:55.364 "num_blocks": 65536, 00:08:55.364 "uuid": "f5004f7f-b039-4290-a606-24db86bdd1b1", 00:08:55.364 "assigned_rate_limits": { 00:08:55.364 "rw_ios_per_sec": 0, 00:08:55.364 "rw_mbytes_per_sec": 0, 00:08:55.364 "r_mbytes_per_sec": 0, 00:08:55.364 
"w_mbytes_per_sec": 0 00:08:55.364 }, 00:08:55.364 "claimed": false, 00:08:55.364 "zoned": false, 00:08:55.364 "supported_io_types": { 00:08:55.364 "read": true, 00:08:55.364 "write": true, 00:08:55.364 "unmap": false, 00:08:55.364 "flush": false, 00:08:55.364 "reset": true, 00:08:55.364 "nvme_admin": false, 00:08:55.364 "nvme_io": false, 00:08:55.364 "nvme_io_md": false, 00:08:55.364 "write_zeroes": true, 00:08:55.364 "zcopy": false, 00:08:55.364 "get_zone_info": false, 00:08:55.364 "zone_management": false, 00:08:55.364 "zone_append": false, 00:08:55.364 "compare": false, 00:08:55.364 "compare_and_write": false, 00:08:55.364 "abort": false, 00:08:55.364 "seek_hole": false, 00:08:55.364 "seek_data": false, 00:08:55.364 "copy": false, 00:08:55.364 "nvme_iov_md": false 00:08:55.364 }, 00:08:55.364 "memory_domains": [ 00:08:55.364 { 00:08:55.364 "dma_device_id": "system", 00:08:55.364 "dma_device_type": 1 00:08:55.364 }, 00:08:55.364 { 00:08:55.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.364 "dma_device_type": 2 00:08:55.364 }, 00:08:55.364 { 00:08:55.364 "dma_device_id": "system", 00:08:55.364 "dma_device_type": 1 00:08:55.364 }, 00:08:55.364 { 00:08:55.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.364 "dma_device_type": 2 00:08:55.364 } 00:08:55.364 ], 00:08:55.364 "driver_specific": { 00:08:55.364 "raid": { 00:08:55.364 "uuid": "f5004f7f-b039-4290-a606-24db86bdd1b1", 00:08:55.364 "strip_size_kb": 0, 00:08:55.364 "state": "online", 00:08:55.364 "raid_level": "raid1", 00:08:55.364 "superblock": false, 00:08:55.364 "num_base_bdevs": 2, 00:08:55.364 "num_base_bdevs_discovered": 2, 00:08:55.364 "num_base_bdevs_operational": 2, 00:08:55.364 "base_bdevs_list": [ 00:08:55.364 { 00:08:55.364 "name": "BaseBdev1", 00:08:55.364 "uuid": "ac1d194f-9768-4184-9f31-b721f0d30e86", 00:08:55.364 "is_configured": true, 00:08:55.364 "data_offset": 0, 00:08:55.364 "data_size": 65536 00:08:55.364 }, 00:08:55.364 { 00:08:55.364 "name": "BaseBdev2", 00:08:55.364 "uuid": 
"3b69ef86-ac99-4c71-882a-cb22b37ba359", 00:08:55.364 "is_configured": true, 00:08:55.364 "data_offset": 0, 00:08:55.364 "data_size": 65536 00:08:55.364 } 00:08:55.364 ] 00:08:55.364 } 00:08:55.364 } 00:08:55.364 }' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.364 BaseBdev2' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.364 03:20:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.364 03:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 [2024-11-05 03:20:09.000731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.623 "name": "Existed_Raid", 00:08:55.623 "uuid": "f5004f7f-b039-4290-a606-24db86bdd1b1", 00:08:55.623 "strip_size_kb": 0, 00:08:55.623 "state": "online", 00:08:55.623 "raid_level": "raid1", 00:08:55.623 "superblock": false, 00:08:55.623 "num_base_bdevs": 2, 00:08:55.623 "num_base_bdevs_discovered": 1, 00:08:55.623 "num_base_bdevs_operational": 1, 00:08:55.623 "base_bdevs_list": [ 00:08:55.623 { 
00:08:55.623 "name": null, 00:08:55.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.623 "is_configured": false, 00:08:55.623 "data_offset": 0, 00:08:55.623 "data_size": 65536 00:08:55.623 }, 00:08:55.623 { 00:08:55.623 "name": "BaseBdev2", 00:08:55.623 "uuid": "3b69ef86-ac99-4c71-882a-cb22b37ba359", 00:08:55.623 "is_configured": true, 00:08:55.623 "data_offset": 0, 00:08:55.623 "data_size": 65536 00:08:55.623 } 00:08:55.623 ] 00:08:55.623 }' 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.623 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.191 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.191 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.191 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.191 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.191 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.191 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:56.192 [2024-11-05 03:20:09.655802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.192 [2024-11-05 03:20:09.655907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.192 [2024-11-05 03:20:09.730760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.192 [2024-11-05 03:20:09.730819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.192 [2024-11-05 03:20:09.730837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62450 00:08:56.192 03:20:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62450 ']' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62450 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62450 00:08:56.192 killing process with pid 62450 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62450' 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62450 00:08:56.192 [2024-11-05 03:20:09.822392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.192 03:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62450 00:08:56.451 [2024-11-05 03:20:09.836517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:57.403 00:08:57.403 real 0m5.323s 00:08:57.403 user 0m8.120s 00:08:57.403 sys 0m0.729s 00:08:57.403 ************************************ 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.403 END TEST raid_state_function_test 00:08:57.403 ************************************ 00:08:57.403 03:20:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:57.403 03:20:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:57.403 03:20:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:57.403 03:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.403 ************************************ 00:08:57.403 START TEST raid_state_function_test_sb 00:08:57.403 ************************************ 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62703 00:08:57.403 Process raid pid: 62703 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62703' 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62703 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62703 ']' 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.403 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.403 03:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.403 [2024-11-05 03:20:11.026774] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:57.403 [2024-11-05 03:20:11.026951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.679 [2024-11-05 03:20:11.211159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.939 [2024-11-05 03:20:11.329211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.939 [2024-11-05 03:20:11.528387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.939 [2024-11-05 03:20:11.528432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.506 03:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:58.506 03:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:58.506 03:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:58.506 03:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.506 03:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.506 [2024-11-05 03:20:11.999919] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.506 [2024-11-05 03:20:11.999988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.506 [2024-11-05 03:20:12.000018] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.506 [2024-11-05 03:20:12.000032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.506 "name": "Existed_Raid", 00:08:58.506 "uuid": "1ca7961a-9619-423a-9a52-c6207b971db0", 00:08:58.506 "strip_size_kb": 0, 00:08:58.506 "state": "configuring", 00:08:58.506 "raid_level": "raid1", 00:08:58.506 "superblock": true, 00:08:58.506 "num_base_bdevs": 2, 00:08:58.506 "num_base_bdevs_discovered": 0, 00:08:58.506 "num_base_bdevs_operational": 2, 00:08:58.506 "base_bdevs_list": [ 00:08:58.506 { 00:08:58.506 "name": "BaseBdev1", 00:08:58.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.506 "is_configured": false, 00:08:58.506 "data_offset": 0, 00:08:58.506 "data_size": 0 00:08:58.506 }, 00:08:58.506 { 00:08:58.506 "name": "BaseBdev2", 00:08:58.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.506 "is_configured": false, 00:08:58.506 "data_offset": 0, 00:08:58.506 "data_size": 0 00:08:58.506 } 00:08:58.506 ] 00:08:58.506 }' 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.506 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 [2024-11-05 03:20:12.496189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:59.074 [2024-11-05 03:20:12.496229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 [2024-11-05 03:20:12.508162] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.074 [2024-11-05 03:20:12.508394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.074 [2024-11-05 03:20:12.508530] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.074 [2024-11-05 03:20:12.508568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 [2024-11-05 03:20:12.551529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.074 BaseBdev1 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 [ 00:08:59.074 { 00:08:59.074 "name": "BaseBdev1", 00:08:59.074 "aliases": [ 00:08:59.074 "b376feeb-fe79-4c88-9194-4efc323b2d53" 00:08:59.074 ], 00:08:59.074 "product_name": "Malloc disk", 00:08:59.074 "block_size": 512, 00:08:59.074 "num_blocks": 65536, 00:08:59.074 "uuid": "b376feeb-fe79-4c88-9194-4efc323b2d53", 00:08:59.074 "assigned_rate_limits": { 00:08:59.074 "rw_ios_per_sec": 0, 00:08:59.074 "rw_mbytes_per_sec": 0, 00:08:59.074 "r_mbytes_per_sec": 0, 00:08:59.074 "w_mbytes_per_sec": 0 00:08:59.074 }, 00:08:59.074 "claimed": true, 
00:08:59.074 "claim_type": "exclusive_write", 00:08:59.074 "zoned": false, 00:08:59.074 "supported_io_types": { 00:08:59.074 "read": true, 00:08:59.074 "write": true, 00:08:59.074 "unmap": true, 00:08:59.074 "flush": true, 00:08:59.074 "reset": true, 00:08:59.074 "nvme_admin": false, 00:08:59.074 "nvme_io": false, 00:08:59.074 "nvme_io_md": false, 00:08:59.074 "write_zeroes": true, 00:08:59.074 "zcopy": true, 00:08:59.074 "get_zone_info": false, 00:08:59.074 "zone_management": false, 00:08:59.074 "zone_append": false, 00:08:59.074 "compare": false, 00:08:59.074 "compare_and_write": false, 00:08:59.074 "abort": true, 00:08:59.074 "seek_hole": false, 00:08:59.074 "seek_data": false, 00:08:59.074 "copy": true, 00:08:59.074 "nvme_iov_md": false 00:08:59.074 }, 00:08:59.074 "memory_domains": [ 00:08:59.074 { 00:08:59.074 "dma_device_id": "system", 00:08:59.074 "dma_device_type": 1 00:08:59.074 }, 00:08:59.074 { 00:08:59.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.074 "dma_device_type": 2 00:08:59.074 } 00:08:59.074 ], 00:08:59.074 "driver_specific": {} 00:08:59.074 } 00:08:59.074 ] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.074 "name": "Existed_Raid", 00:08:59.074 "uuid": "fceab02f-e2da-4231-b5bf-30246f2a6292", 00:08:59.074 "strip_size_kb": 0, 00:08:59.074 "state": "configuring", 00:08:59.074 "raid_level": "raid1", 00:08:59.074 "superblock": true, 00:08:59.074 "num_base_bdevs": 2, 00:08:59.074 "num_base_bdevs_discovered": 1, 00:08:59.074 "num_base_bdevs_operational": 2, 00:08:59.074 "base_bdevs_list": [ 00:08:59.074 { 00:08:59.074 "name": "BaseBdev1", 00:08:59.074 "uuid": "b376feeb-fe79-4c88-9194-4efc323b2d53", 00:08:59.074 "is_configured": true, 00:08:59.074 "data_offset": 2048, 00:08:59.074 "data_size": 63488 00:08:59.074 }, 00:08:59.074 { 00:08:59.074 "name": "BaseBdev2", 00:08:59.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.074 "is_configured": false, 00:08:59.074 
"data_offset": 0, 00:08:59.074 "data_size": 0 00:08:59.074 } 00:08:59.074 ] 00:08:59.074 }' 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.074 03:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 [2024-11-05 03:20:13.059789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.642 [2024-11-05 03:20:13.059861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 [2024-11-05 03:20:13.067866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.642 [2024-11-05 03:20:13.070409] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.642 [2024-11-05 03:20:13.070484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.642 "name": "Existed_Raid", 00:08:59.642 "uuid": "9d329734-0eda-4aa5-8215-c148d202d6ed", 00:08:59.642 "strip_size_kb": 0, 00:08:59.642 "state": "configuring", 00:08:59.642 "raid_level": "raid1", 00:08:59.642 "superblock": true, 00:08:59.642 "num_base_bdevs": 2, 00:08:59.642 "num_base_bdevs_discovered": 1, 00:08:59.642 "num_base_bdevs_operational": 2, 00:08:59.642 "base_bdevs_list": [ 00:08:59.642 { 00:08:59.642 "name": "BaseBdev1", 00:08:59.642 "uuid": "b376feeb-fe79-4c88-9194-4efc323b2d53", 00:08:59.642 "is_configured": true, 00:08:59.642 "data_offset": 2048, 00:08:59.642 "data_size": 63488 00:08:59.642 }, 00:08:59.642 { 00:08:59.642 "name": "BaseBdev2", 00:08:59.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.642 "is_configured": false, 00:08:59.642 "data_offset": 0, 00:08:59.642 "data_size": 0 00:08:59.642 } 00:08:59.642 ] 00:08:59.642 }' 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.642 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.210 [2024-11-05 03:20:13.606960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.210 [2024-11-05 03:20:13.607301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.210 [2024-11-05 03:20:13.607340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.210 [2024-11-05 03:20:13.607672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:00.210 
BaseBdev2 00:09:00.210 [2024-11-05 03:20:13.607874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.210 [2024-11-05 03:20:13.607901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:00.210 [2024-11-05 03:20:13.608073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.210 [ 00:09:00.210 { 00:09:00.210 "name": "BaseBdev2", 00:09:00.210 "aliases": [ 00:09:00.210 "f73e4af0-ce18-4d95-ad10-acb90a9d0efd" 00:09:00.210 ], 00:09:00.210 "product_name": "Malloc disk", 00:09:00.210 "block_size": 512, 00:09:00.210 "num_blocks": 65536, 00:09:00.210 "uuid": "f73e4af0-ce18-4d95-ad10-acb90a9d0efd", 00:09:00.210 "assigned_rate_limits": { 00:09:00.210 "rw_ios_per_sec": 0, 00:09:00.210 "rw_mbytes_per_sec": 0, 00:09:00.210 "r_mbytes_per_sec": 0, 00:09:00.210 "w_mbytes_per_sec": 0 00:09:00.210 }, 00:09:00.210 "claimed": true, 00:09:00.210 "claim_type": "exclusive_write", 00:09:00.210 "zoned": false, 00:09:00.210 "supported_io_types": { 00:09:00.210 "read": true, 00:09:00.210 "write": true, 00:09:00.210 "unmap": true, 00:09:00.210 "flush": true, 00:09:00.210 "reset": true, 00:09:00.210 "nvme_admin": false, 00:09:00.210 "nvme_io": false, 00:09:00.210 "nvme_io_md": false, 00:09:00.210 "write_zeroes": true, 00:09:00.210 "zcopy": true, 00:09:00.210 "get_zone_info": false, 00:09:00.210 "zone_management": false, 00:09:00.210 "zone_append": false, 00:09:00.210 "compare": false, 00:09:00.210 "compare_and_write": false, 00:09:00.210 "abort": true, 00:09:00.210 "seek_hole": false, 00:09:00.210 "seek_data": false, 00:09:00.210 "copy": true, 00:09:00.210 "nvme_iov_md": false 00:09:00.210 }, 00:09:00.210 "memory_domains": [ 00:09:00.210 { 00:09:00.210 "dma_device_id": "system", 00:09:00.210 "dma_device_type": 1 00:09:00.210 }, 00:09:00.210 { 00:09:00.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.210 "dma_device_type": 2 00:09:00.210 } 00:09:00.210 ], 00:09:00.210 "driver_specific": {} 00:09:00.210 } 00:09:00.210 ] 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.210 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:00.210 "name": "Existed_Raid", 00:09:00.210 "uuid": "9d329734-0eda-4aa5-8215-c148d202d6ed", 00:09:00.210 "strip_size_kb": 0, 00:09:00.210 "state": "online", 00:09:00.210 "raid_level": "raid1", 00:09:00.210 "superblock": true, 00:09:00.210 "num_base_bdevs": 2, 00:09:00.210 "num_base_bdevs_discovered": 2, 00:09:00.210 "num_base_bdevs_operational": 2, 00:09:00.210 "base_bdevs_list": [ 00:09:00.210 { 00:09:00.210 "name": "BaseBdev1", 00:09:00.210 "uuid": "b376feeb-fe79-4c88-9194-4efc323b2d53", 00:09:00.210 "is_configured": true, 00:09:00.210 "data_offset": 2048, 00:09:00.210 "data_size": 63488 00:09:00.210 }, 00:09:00.210 { 00:09:00.210 "name": "BaseBdev2", 00:09:00.210 "uuid": "f73e4af0-ce18-4d95-ad10-acb90a9d0efd", 00:09:00.210 "is_configured": true, 00:09:00.210 "data_offset": 2048, 00:09:00.211 "data_size": 63488 00:09:00.211 } 00:09:00.211 ] 00:09:00.211 }' 00:09:00.211 03:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.211 03:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.778 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.778 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.778 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.779 03:20:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.779 [2024-11-05 03:20:14.135553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.779 "name": "Existed_Raid", 00:09:00.779 "aliases": [ 00:09:00.779 "9d329734-0eda-4aa5-8215-c148d202d6ed" 00:09:00.779 ], 00:09:00.779 "product_name": "Raid Volume", 00:09:00.779 "block_size": 512, 00:09:00.779 "num_blocks": 63488, 00:09:00.779 "uuid": "9d329734-0eda-4aa5-8215-c148d202d6ed", 00:09:00.779 "assigned_rate_limits": { 00:09:00.779 "rw_ios_per_sec": 0, 00:09:00.779 "rw_mbytes_per_sec": 0, 00:09:00.779 "r_mbytes_per_sec": 0, 00:09:00.779 "w_mbytes_per_sec": 0 00:09:00.779 }, 00:09:00.779 "claimed": false, 00:09:00.779 "zoned": false, 00:09:00.779 "supported_io_types": { 00:09:00.779 "read": true, 00:09:00.779 "write": true, 00:09:00.779 "unmap": false, 00:09:00.779 "flush": false, 00:09:00.779 "reset": true, 00:09:00.779 "nvme_admin": false, 00:09:00.779 "nvme_io": false, 00:09:00.779 "nvme_io_md": false, 00:09:00.779 "write_zeroes": true, 00:09:00.779 "zcopy": false, 00:09:00.779 "get_zone_info": false, 00:09:00.779 "zone_management": false, 00:09:00.779 "zone_append": false, 00:09:00.779 "compare": false, 00:09:00.779 "compare_and_write": false, 00:09:00.779 "abort": false, 00:09:00.779 "seek_hole": false, 00:09:00.779 "seek_data": false, 00:09:00.779 "copy": false, 00:09:00.779 "nvme_iov_md": false 00:09:00.779 }, 00:09:00.779 "memory_domains": [ 00:09:00.779 { 00:09:00.779 "dma_device_id": "system", 00:09:00.779 
"dma_device_type": 1 00:09:00.779 }, 00:09:00.779 { 00:09:00.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.779 "dma_device_type": 2 00:09:00.779 }, 00:09:00.779 { 00:09:00.779 "dma_device_id": "system", 00:09:00.779 "dma_device_type": 1 00:09:00.779 }, 00:09:00.779 { 00:09:00.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.779 "dma_device_type": 2 00:09:00.779 } 00:09:00.779 ], 00:09:00.779 "driver_specific": { 00:09:00.779 "raid": { 00:09:00.779 "uuid": "9d329734-0eda-4aa5-8215-c148d202d6ed", 00:09:00.779 "strip_size_kb": 0, 00:09:00.779 "state": "online", 00:09:00.779 "raid_level": "raid1", 00:09:00.779 "superblock": true, 00:09:00.779 "num_base_bdevs": 2, 00:09:00.779 "num_base_bdevs_discovered": 2, 00:09:00.779 "num_base_bdevs_operational": 2, 00:09:00.779 "base_bdevs_list": [ 00:09:00.779 { 00:09:00.779 "name": "BaseBdev1", 00:09:00.779 "uuid": "b376feeb-fe79-4c88-9194-4efc323b2d53", 00:09:00.779 "is_configured": true, 00:09:00.779 "data_offset": 2048, 00:09:00.779 "data_size": 63488 00:09:00.779 }, 00:09:00.779 { 00:09:00.779 "name": "BaseBdev2", 00:09:00.779 "uuid": "f73e4af0-ce18-4d95-ad10-acb90a9d0efd", 00:09:00.779 "is_configured": true, 00:09:00.779 "data_offset": 2048, 00:09:00.779 "data_size": 63488 00:09:00.779 } 00:09:00.779 ] 00:09:00.779 } 00:09:00.779 } 00:09:00.779 }' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.779 BaseBdev2' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.779 03:20:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.779 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.779 [2024-11-05 03:20:14.403329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.038 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.039 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.039 "name": "Existed_Raid", 00:09:01.039 "uuid": "9d329734-0eda-4aa5-8215-c148d202d6ed", 00:09:01.039 "strip_size_kb": 0, 00:09:01.039 "state": "online", 00:09:01.039 "raid_level": "raid1", 00:09:01.039 "superblock": true, 00:09:01.039 "num_base_bdevs": 2, 00:09:01.039 "num_base_bdevs_discovered": 1, 00:09:01.039 "num_base_bdevs_operational": 1, 00:09:01.039 "base_bdevs_list": [ 00:09:01.039 { 00:09:01.039 "name": null, 00:09:01.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.039 "is_configured": false, 00:09:01.039 "data_offset": 0, 00:09:01.039 "data_size": 63488 00:09:01.039 }, 00:09:01.039 { 00:09:01.039 "name": "BaseBdev2", 00:09:01.039 "uuid": "f73e4af0-ce18-4d95-ad10-acb90a9d0efd", 00:09:01.039 "is_configured": true, 00:09:01.039 "data_offset": 2048, 00:09:01.039 "data_size": 63488 00:09:01.039 } 00:09:01.039 ] 00:09:01.039 }' 00:09:01.039 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.039 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.625 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:09:01.625 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.625 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.625 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.625 03:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.625 03:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.625 [2024-11-05 03:20:15.057739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.625 [2024-11-05 03:20:15.057867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.625 [2024-11-05 03:20:15.133754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.625 [2024-11-05 03:20:15.133832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.625 [2024-11-05 03:20:15.133851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62703 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62703 ']' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62703 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62703 00:09:01.625 killing process with pid 62703 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62703' 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62703 00:09:01.625 [2024-11-05 03:20:15.216204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.625 03:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62703 00:09:01.625 [2024-11-05 03:20:15.231131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.006 03:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.006 00:09:03.006 real 0m5.366s 00:09:03.006 user 0m8.104s 00:09:03.006 sys 0m0.778s 00:09:03.006 ************************************ 00:09:03.006 END TEST raid_state_function_test_sb 00:09:03.006 ************************************ 00:09:03.006 03:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.006 03:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.006 03:20:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:03.006 03:20:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:03.006 03:20:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.006 03:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.006 ************************************ 00:09:03.006 START TEST raid_superblock_test 00:09:03.006 ************************************ 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62955 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62955 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62955 ']' 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.006 03:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.006 [2024-11-05 03:20:16.399948] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:09:03.006 [2024-11-05 03:20:16.400464] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62955 ] 00:09:03.006 [2024-11-05 03:20:16.575260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.264 [2024-11-05 03:20:16.703702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.523 [2024-11-05 03:20:16.902989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.523 [2024-11-05 03:20:16.903039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.782 03:20:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.782 malloc1 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.782 [2024-11-05 03:20:17.381414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.782 [2024-11-05 03:20:17.381496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.782 [2024-11-05 03:20:17.381532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:03.782 [2024-11-05 03:20:17.381547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.782 
[2024-11-05 03:20:17.384538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.782 [2024-11-05 03:20:17.384593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.782 pt1 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.782 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:03.783 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.783 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 malloc2 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.042 03:20:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 [2024-11-05 03:20:17.438181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.042 [2024-11-05 03:20:17.438534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.042 [2024-11-05 03:20:17.438611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:04.042 [2024-11-05 03:20:17.438861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.042 [2024-11-05 03:20:17.441913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.042 [2024-11-05 03:20:17.442080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.042 pt2 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 [2024-11-05 03:20:17.450475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.042 [2024-11-05 03:20:17.453237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.042 [2024-11-05 03:20:17.453654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:04.042 [2024-11-05 03:20:17.453795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:04.042 [2024-11-05 
03:20:17.454138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:04.042 [2024-11-05 03:20:17.454415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:04.042 [2024-11-05 03:20:17.454455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:04.042 [2024-11-05 03:20:17.454737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.042 "name": "raid_bdev1", 00:09:04.042 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:04.042 "strip_size_kb": 0, 00:09:04.042 "state": "online", 00:09:04.042 "raid_level": "raid1", 00:09:04.042 "superblock": true, 00:09:04.042 "num_base_bdevs": 2, 00:09:04.042 "num_base_bdevs_discovered": 2, 00:09:04.042 "num_base_bdevs_operational": 2, 00:09:04.042 "base_bdevs_list": [ 00:09:04.042 { 00:09:04.042 "name": "pt1", 00:09:04.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.042 "is_configured": true, 00:09:04.042 "data_offset": 2048, 00:09:04.042 "data_size": 63488 00:09:04.042 }, 00:09:04.042 { 00:09:04.042 "name": "pt2", 00:09:04.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.042 "is_configured": true, 00:09:04.042 "data_offset": 2048, 00:09:04.042 "data_size": 63488 00:09:04.042 } 00:09:04.042 ] 00:09:04.042 }' 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.042 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.610 03:20:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.610 [2024-11-05 03:20:17.979186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.610 03:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.610 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.610 "name": "raid_bdev1", 00:09:04.610 "aliases": [ 00:09:04.610 "7610184f-ab6b-4459-8a38-a2a3eb183c9f" 00:09:04.610 ], 00:09:04.610 "product_name": "Raid Volume", 00:09:04.610 "block_size": 512, 00:09:04.610 "num_blocks": 63488, 00:09:04.610 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:04.610 "assigned_rate_limits": { 00:09:04.610 "rw_ios_per_sec": 0, 00:09:04.610 "rw_mbytes_per_sec": 0, 00:09:04.610 "r_mbytes_per_sec": 0, 00:09:04.610 "w_mbytes_per_sec": 0 00:09:04.610 }, 00:09:04.610 "claimed": false, 00:09:04.610 "zoned": false, 00:09:04.610 "supported_io_types": { 00:09:04.610 "read": true, 00:09:04.610 "write": true, 00:09:04.610 "unmap": false, 00:09:04.610 "flush": false, 00:09:04.610 "reset": true, 00:09:04.610 "nvme_admin": false, 00:09:04.611 "nvme_io": false, 00:09:04.611 "nvme_io_md": false, 00:09:04.611 "write_zeroes": true, 00:09:04.611 "zcopy": false, 00:09:04.611 "get_zone_info": false, 00:09:04.611 "zone_management": false, 00:09:04.611 "zone_append": false, 00:09:04.611 "compare": false, 00:09:04.611 "compare_and_write": false, 00:09:04.611 "abort": false, 00:09:04.611 "seek_hole": false, 00:09:04.611 
"seek_data": false, 00:09:04.611 "copy": false, 00:09:04.611 "nvme_iov_md": false 00:09:04.611 }, 00:09:04.611 "memory_domains": [ 00:09:04.611 { 00:09:04.611 "dma_device_id": "system", 00:09:04.611 "dma_device_type": 1 00:09:04.611 }, 00:09:04.611 { 00:09:04.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.611 "dma_device_type": 2 00:09:04.611 }, 00:09:04.611 { 00:09:04.611 "dma_device_id": "system", 00:09:04.611 "dma_device_type": 1 00:09:04.611 }, 00:09:04.611 { 00:09:04.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.611 "dma_device_type": 2 00:09:04.611 } 00:09:04.611 ], 00:09:04.611 "driver_specific": { 00:09:04.611 "raid": { 00:09:04.611 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:04.611 "strip_size_kb": 0, 00:09:04.611 "state": "online", 00:09:04.611 "raid_level": "raid1", 00:09:04.611 "superblock": true, 00:09:04.611 "num_base_bdevs": 2, 00:09:04.611 "num_base_bdevs_discovered": 2, 00:09:04.611 "num_base_bdevs_operational": 2, 00:09:04.611 "base_bdevs_list": [ 00:09:04.611 { 00:09:04.611 "name": "pt1", 00:09:04.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.611 "is_configured": true, 00:09:04.611 "data_offset": 2048, 00:09:04.611 "data_size": 63488 00:09:04.611 }, 00:09:04.611 { 00:09:04.611 "name": "pt2", 00:09:04.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.611 "is_configured": true, 00:09:04.611 "data_offset": 2048, 00:09:04.611 "data_size": 63488 00:09:04.611 } 00:09:04.611 ] 00:09:04.611 } 00:09:04.611 } 00:09:04.611 }' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.611 pt2' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.611 03:20:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.611 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.611 [2024-11-05 03:20:18.243146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7610184f-ab6b-4459-8a38-a2a3eb183c9f 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7610184f-ab6b-4459-8a38-a2a3eb183c9f ']' 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 [2024-11-05 03:20:18.290868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.870 [2024-11-05 03:20:18.291043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.870 [2024-11-05 03:20:18.291164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.870 [2024-11-05 03:20:18.291233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.870 [2024-11-05 03:20:18.291251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 [2024-11-05 03:20:18.426975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:04.870 [2024-11-05 03:20:18.429485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:04.870 [2024-11-05 03:20:18.429572] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:09:04.870 [2024-11-05 03:20:18.429645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:04.870 [2024-11-05 03:20:18.429685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.870 [2024-11-05 03:20:18.429730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:04.870 request: 00:09:04.870 { 00:09:04.870 "name": "raid_bdev1", 00:09:04.870 "raid_level": "raid1", 00:09:04.870 "base_bdevs": [ 00:09:04.870 "malloc1", 00:09:04.870 "malloc2" 00:09:04.870 ], 00:09:04.870 "superblock": false, 00:09:04.870 "method": "bdev_raid_create", 00:09:04.870 "req_id": 1 00:09:04.870 } 00:09:04.870 Got JSON-RPC error response 00:09:04.870 response: 00:09:04.870 { 00:09:04.870 "code": -17, 00:09:04.870 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:04.870 } 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:04.870 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 [2024-11-05 03:20:18.494927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.871 [2024-11-05 03:20:18.495122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.871 [2024-11-05 03:20:18.495188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:04.871 [2024-11-05 03:20:18.495383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.871 [2024-11-05 03:20:18.498231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.871 [2024-11-05 03:20:18.498405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.871 [2024-11-05 03:20:18.498609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:04.871 [2024-11-05 03:20:18.498785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.871 pt1 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.871 03:20:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.129 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.129 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.129 "name": "raid_bdev1", 00:09:05.129 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:05.129 "strip_size_kb": 0, 00:09:05.129 "state": "configuring", 00:09:05.129 "raid_level": "raid1", 00:09:05.129 "superblock": true, 00:09:05.129 "num_base_bdevs": 2, 00:09:05.129 "num_base_bdevs_discovered": 1, 00:09:05.129 "num_base_bdevs_operational": 2, 00:09:05.129 "base_bdevs_list": [ 00:09:05.129 { 00:09:05.129 "name": "pt1", 00:09:05.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.129 
"is_configured": true, 00:09:05.129 "data_offset": 2048, 00:09:05.129 "data_size": 63488 00:09:05.129 }, 00:09:05.129 { 00:09:05.129 "name": null, 00:09:05.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.129 "is_configured": false, 00:09:05.129 "data_offset": 2048, 00:09:05.129 "data_size": 63488 00:09:05.129 } 00:09:05.129 ] 00:09:05.129 }' 00:09:05.129 03:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.129 03:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.697 [2024-11-05 03:20:19.031247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.697 [2024-11-05 03:20:19.031520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.697 [2024-11-05 03:20:19.031564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:05.697 [2024-11-05 03:20:19.031584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.697 [2024-11-05 03:20:19.032168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.697 [2024-11-05 03:20:19.032202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.697 [2024-11-05 03:20:19.032342] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.697 [2024-11-05 03:20:19.032398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.697 [2024-11-05 03:20:19.032540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.697 [2024-11-05 03:20:19.032568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:05.697 [2024-11-05 03:20:19.032871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:05.697 [2024-11-05 03:20:19.033059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.697 [2024-11-05 03:20:19.033081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:05.697 [2024-11-05 03:20:19.033245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.697 pt2 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:05.697 
03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.697 "name": "raid_bdev1", 00:09:05.697 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:05.697 "strip_size_kb": 0, 00:09:05.697 "state": "online", 00:09:05.697 "raid_level": "raid1", 00:09:05.697 "superblock": true, 00:09:05.697 "num_base_bdevs": 2, 00:09:05.697 "num_base_bdevs_discovered": 2, 00:09:05.697 "num_base_bdevs_operational": 2, 00:09:05.697 "base_bdevs_list": [ 00:09:05.697 { 00:09:05.697 "name": "pt1", 00:09:05.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.697 "is_configured": true, 00:09:05.697 "data_offset": 2048, 00:09:05.697 "data_size": 63488 00:09:05.697 }, 00:09:05.697 { 00:09:05.697 "name": "pt2", 00:09:05.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.697 "is_configured": true, 00:09:05.697 "data_offset": 2048, 00:09:05.697 "data_size": 63488 00:09:05.697 } 00:09:05.697 ] 00:09:05.697 }' 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:05.697 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.956 [2024-11-05 03:20:19.551739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.956 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.218 "name": "raid_bdev1", 00:09:06.218 "aliases": [ 00:09:06.218 "7610184f-ab6b-4459-8a38-a2a3eb183c9f" 00:09:06.218 ], 00:09:06.218 "product_name": "Raid Volume", 00:09:06.218 "block_size": 512, 00:09:06.218 "num_blocks": 63488, 00:09:06.218 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:06.218 "assigned_rate_limits": { 00:09:06.218 "rw_ios_per_sec": 0, 00:09:06.218 "rw_mbytes_per_sec": 0, 00:09:06.218 "r_mbytes_per_sec": 0, 00:09:06.218 "w_mbytes_per_sec": 0 
00:09:06.218 }, 00:09:06.218 "claimed": false, 00:09:06.218 "zoned": false, 00:09:06.218 "supported_io_types": { 00:09:06.218 "read": true, 00:09:06.218 "write": true, 00:09:06.218 "unmap": false, 00:09:06.218 "flush": false, 00:09:06.218 "reset": true, 00:09:06.218 "nvme_admin": false, 00:09:06.218 "nvme_io": false, 00:09:06.218 "nvme_io_md": false, 00:09:06.218 "write_zeroes": true, 00:09:06.218 "zcopy": false, 00:09:06.218 "get_zone_info": false, 00:09:06.218 "zone_management": false, 00:09:06.218 "zone_append": false, 00:09:06.218 "compare": false, 00:09:06.218 "compare_and_write": false, 00:09:06.218 "abort": false, 00:09:06.218 "seek_hole": false, 00:09:06.218 "seek_data": false, 00:09:06.218 "copy": false, 00:09:06.218 "nvme_iov_md": false 00:09:06.218 }, 00:09:06.218 "memory_domains": [ 00:09:06.218 { 00:09:06.218 "dma_device_id": "system", 00:09:06.218 "dma_device_type": 1 00:09:06.218 }, 00:09:06.218 { 00:09:06.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.218 "dma_device_type": 2 00:09:06.218 }, 00:09:06.218 { 00:09:06.218 "dma_device_id": "system", 00:09:06.218 "dma_device_type": 1 00:09:06.218 }, 00:09:06.218 { 00:09:06.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.218 "dma_device_type": 2 00:09:06.218 } 00:09:06.218 ], 00:09:06.218 "driver_specific": { 00:09:06.218 "raid": { 00:09:06.218 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:06.218 "strip_size_kb": 0, 00:09:06.218 "state": "online", 00:09:06.218 "raid_level": "raid1", 00:09:06.218 "superblock": true, 00:09:06.218 "num_base_bdevs": 2, 00:09:06.218 "num_base_bdevs_discovered": 2, 00:09:06.218 "num_base_bdevs_operational": 2, 00:09:06.218 "base_bdevs_list": [ 00:09:06.218 { 00:09:06.218 "name": "pt1", 00:09:06.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.218 "is_configured": true, 00:09:06.218 "data_offset": 2048, 00:09:06.218 "data_size": 63488 00:09:06.218 }, 00:09:06.218 { 00:09:06.218 "name": "pt2", 00:09:06.218 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:06.218 "is_configured": true, 00:09:06.218 "data_offset": 2048, 00:09:06.218 "data_size": 63488 00:09:06.218 } 00:09:06.218 ] 00:09:06.218 } 00:09:06.218 } 00:09:06.218 }' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:06.218 pt2' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.218 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.219 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.219 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.219 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:06.219 [2024-11-05 03:20:19.815759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.219 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7610184f-ab6b-4459-8a38-a2a3eb183c9f '!=' 7610184f-ab6b-4459-8a38-a2a3eb183c9f ']' 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.477 [2024-11-05 03:20:19.863536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:06.477 "name": "raid_bdev1", 00:09:06.477 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:06.477 "strip_size_kb": 0, 00:09:06.477 "state": "online", 00:09:06.477 "raid_level": "raid1", 00:09:06.477 "superblock": true, 00:09:06.477 "num_base_bdevs": 2, 00:09:06.477 "num_base_bdevs_discovered": 1, 00:09:06.477 "num_base_bdevs_operational": 1, 00:09:06.477 "base_bdevs_list": [ 00:09:06.477 { 00:09:06.477 "name": null, 00:09:06.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.477 "is_configured": false, 00:09:06.477 "data_offset": 0, 00:09:06.477 "data_size": 63488 00:09:06.477 }, 00:09:06.477 { 00:09:06.477 "name": "pt2", 00:09:06.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.477 "is_configured": true, 00:09:06.477 "data_offset": 2048, 00:09:06.477 "data_size": 63488 00:09:06.477 } 00:09:06.477 ] 00:09:06.477 }' 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.477 03:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.044 [2024-11-05 03:20:20.379742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.044 [2024-11-05 03:20:20.379774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.044 [2024-11-05 03:20:20.379863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.044 [2024-11-05 03:20:20.379950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.044 [2024-11-05 03:20:20.379982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:07.044 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.045 [2024-11-05 03:20:20.451751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.045 [2024-11-05 03:20:20.452001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.045 [2024-11-05 03:20:20.452036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:07.045 [2024-11-05 03:20:20.452054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.045 [2024-11-05 03:20:20.455141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.045 [2024-11-05 03:20:20.455373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.045 [2024-11-05 03:20:20.455481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:07.045 [2024-11-05 03:20:20.455542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.045 [2024-11-05 03:20:20.455668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.045 [2024-11-05 03:20:20.455705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.045 [2024-11-05 03:20:20.456001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:07.045 [2024-11-05 03:20:20.456169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.045 [2024-11-05 03:20:20.456184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:09:07.045 [2024-11-05 03:20:20.456446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.045 pt2 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:07.045 "name": "raid_bdev1", 00:09:07.045 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:07.045 "strip_size_kb": 0, 00:09:07.045 "state": "online", 00:09:07.045 "raid_level": "raid1", 00:09:07.045 "superblock": true, 00:09:07.045 "num_base_bdevs": 2, 00:09:07.045 "num_base_bdevs_discovered": 1, 00:09:07.045 "num_base_bdevs_operational": 1, 00:09:07.045 "base_bdevs_list": [ 00:09:07.045 { 00:09:07.045 "name": null, 00:09:07.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.045 "is_configured": false, 00:09:07.045 "data_offset": 2048, 00:09:07.045 "data_size": 63488 00:09:07.045 }, 00:09:07.045 { 00:09:07.045 "name": "pt2", 00:09:07.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.045 "is_configured": true, 00:09:07.045 "data_offset": 2048, 00:09:07.045 "data_size": 63488 00:09:07.045 } 00:09:07.045 ] 00:09:07.045 }' 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.045 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.630 03:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.630 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.630 03:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.630 [2024-11-05 03:20:20.999960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.630 [2024-11-05 03:20:21.000165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.630 [2024-11-05 03:20:21.000270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.631 [2024-11-05 03:20:21.000398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.631 [2024-11-05 03:20:21.000417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.631 [2024-11-05 03:20:21.068007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.631 [2024-11-05 03:20:21.068262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.631 [2024-11-05 03:20:21.068319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:07.631 [2024-11-05 03:20:21.068338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.631 [2024-11-05 03:20:21.071365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.631 [2024-11-05 03:20:21.071534] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.631 [2024-11-05 03:20:21.071658] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:07.631 [2024-11-05 03:20:21.071731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.631 [2024-11-05 03:20:21.071923] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:07.631 [2024-11-05 03:20:21.071940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.631 [2024-11-05 03:20:21.071961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:07.631 [2024-11-05 03:20:21.072027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.631 [2024-11-05 03:20:21.072251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:07.631 [2024-11-05 03:20:21.072282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.631 pt1 00:09:07.631 [2024-11-05 03:20:21.072664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:07.631 [2024-11-05 03:20:21.072914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:07.631 [2024-11-05 03:20:21.072941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.631 [2024-11-05 03:20:21.073133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.631 "name": "raid_bdev1", 00:09:07.631 "uuid": "7610184f-ab6b-4459-8a38-a2a3eb183c9f", 00:09:07.631 "strip_size_kb": 0, 00:09:07.631 "state": "online", 00:09:07.631 "raid_level": "raid1", 00:09:07.631 "superblock": true, 00:09:07.631 "num_base_bdevs": 2, 00:09:07.631 "num_base_bdevs_discovered": 1, 00:09:07.631 "num_base_bdevs_operational": 
1, 00:09:07.631 "base_bdevs_list": [ 00:09:07.631 { 00:09:07.631 "name": null, 00:09:07.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.631 "is_configured": false, 00:09:07.631 "data_offset": 2048, 00:09:07.631 "data_size": 63488 00:09:07.631 }, 00:09:07.631 { 00:09:07.631 "name": "pt2", 00:09:07.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.631 "is_configured": true, 00:09:07.631 "data_offset": 2048, 00:09:07.631 "data_size": 63488 00:09:07.631 } 00:09:07.631 ] 00:09:07.631 }' 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.631 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.199 [2024-11-05 03:20:21.644497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7610184f-ab6b-4459-8a38-a2a3eb183c9f '!=' 7610184f-ab6b-4459-8a38-a2a3eb183c9f ']' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62955 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62955 ']' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62955 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62955 00:09:08.199 killing process with pid 62955 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62955' 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62955 00:09:08.199 [2024-11-05 03:20:21.724619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.199 03:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62955 00:09:08.200 [2024-11-05 03:20:21.724774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.200 [2024-11-05 03:20:21.724829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.200 [2024-11-05 03:20:21.724848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:09:08.458 [2024-11-05 03:20:21.884450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.394 ************************************ 00:09:09.394 END TEST raid_superblock_test 00:09:09.394 ************************************ 00:09:09.394 03:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:09.394 00:09:09.394 real 0m6.559s 00:09:09.394 user 0m10.504s 00:09:09.394 sys 0m0.898s 00:09:09.394 03:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.394 03:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.394 03:20:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:09.394 03:20:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:09.394 03:20:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.394 03:20:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.394 ************************************ 00:09:09.394 START TEST raid_read_error_test 00:09:09.394 ************************************ 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wzqghGEL50 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63291 00:09:09.394 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63291 00:09:09.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63291 ']' 00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.395 03:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.395 [2024-11-05 03:20:23.006851] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:09:09.395 [2024-11-05 03:20:23.007003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63291 ] 00:09:09.768 [2024-11-05 03:20:23.178724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.768 [2024-11-05 03:20:23.296890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.046 [2024-11-05 03:20:23.482855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.046 [2024-11-05 03:20:23.482907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.614 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 BaseBdev1_malloc 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 true 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 [2024-11-05 03:20:23.998821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.615 [2024-11-05 03:20:23.999073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.615 [2024-11-05 03:20:23.999113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.615 [2024-11-05 03:20:23.999133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.615 [2024-11-05 03:20:24.002116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.615 [2024-11-05 03:20:24.002430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.615 BaseBdev1 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 BaseBdev2_malloc 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 true 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 [2024-11-05 03:20:24.058991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.615 [2024-11-05 03:20:24.059074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.615 [2024-11-05 03:20:24.059099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.615 [2024-11-05 03:20:24.059116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.615 [2024-11-05 03:20:24.061889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.615 [2024-11-05 03:20:24.062115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.615 BaseBdev2 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 [2024-11-05 03:20:24.067109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.615 
[2024-11-05 03:20:24.069442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.615 [2024-11-05 03:20:24.069702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.615 [2024-11-05 03:20:24.069724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:10.615 [2024-11-05 03:20:24.070010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:10.615 [2024-11-05 03:20:24.070248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.615 [2024-11-05 03:20:24.070263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:10.615 [2024-11-05 03:20:24.070458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.615 "name": "raid_bdev1", 00:09:10.615 "uuid": "830c854a-b529-4506-9bb8-2402baec0ca7", 00:09:10.615 "strip_size_kb": 0, 00:09:10.615 "state": "online", 00:09:10.615 "raid_level": "raid1", 00:09:10.615 "superblock": true, 00:09:10.615 "num_base_bdevs": 2, 00:09:10.615 "num_base_bdevs_discovered": 2, 00:09:10.615 "num_base_bdevs_operational": 2, 00:09:10.615 "base_bdevs_list": [ 00:09:10.615 { 00:09:10.615 "name": "BaseBdev1", 00:09:10.615 "uuid": "847ad226-7389-5001-b270-2011772027b8", 00:09:10.615 "is_configured": true, 00:09:10.615 "data_offset": 2048, 00:09:10.615 "data_size": 63488 00:09:10.615 }, 00:09:10.615 { 00:09:10.615 "name": "BaseBdev2", 00:09:10.615 "uuid": "13de1c06-68e4-5388-978d-8ff8f16704de", 00:09:10.615 "is_configured": true, 00:09:10.615 "data_offset": 2048, 00:09:10.615 "data_size": 63488 00:09:10.615 } 00:09:10.615 ] 00:09:10.615 }' 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.615 03:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.183 03:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:11.183 03:20:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:11.183 [2024-11-05 03:20:24.768536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.119 "name": "raid_bdev1", 00:09:12.119 "uuid": "830c854a-b529-4506-9bb8-2402baec0ca7", 00:09:12.119 "strip_size_kb": 0, 00:09:12.119 "state": "online", 00:09:12.119 "raid_level": "raid1", 00:09:12.119 "superblock": true, 00:09:12.119 "num_base_bdevs": 2, 00:09:12.119 "num_base_bdevs_discovered": 2, 00:09:12.119 "num_base_bdevs_operational": 2, 00:09:12.119 "base_bdevs_list": [ 00:09:12.119 { 00:09:12.119 "name": "BaseBdev1", 00:09:12.119 "uuid": "847ad226-7389-5001-b270-2011772027b8", 00:09:12.119 "is_configured": true, 00:09:12.119 "data_offset": 2048, 00:09:12.119 "data_size": 63488 00:09:12.119 }, 00:09:12.119 { 00:09:12.119 "name": "BaseBdev2", 00:09:12.119 "uuid": "13de1c06-68e4-5388-978d-8ff8f16704de", 00:09:12.119 "is_configured": true, 00:09:12.119 "data_offset": 2048, 00:09:12.119 "data_size": 63488 00:09:12.119 } 00:09:12.119 ] 00:09:12.119 }' 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.119 03:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.686 03:20:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.686 [2024-11-05 03:20:26.185483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.686 [2024-11-05 03:20:26.185722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.686 [2024-11-05 03:20:26.189146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.686 [2024-11-05 03:20:26.189352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.686 [2024-11-05 03:20:26.189565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.686 [2024-11-05 03:20:26.189595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:12.686 { 00:09:12.686 "results": [ 00:09:12.686 { 00:09:12.686 "job": "raid_bdev1", 00:09:12.686 "core_mask": "0x1", 00:09:12.686 "workload": "randrw", 00:09:12.686 "percentage": 50, 00:09:12.686 "status": "finished", 00:09:12.686 "queue_depth": 1, 00:09:12.686 "io_size": 131072, 00:09:12.686 "runtime": 1.414832, 00:09:12.686 "iops": 13668.053874947696, 00:09:12.686 "mibps": 1708.506734368462, 00:09:12.686 "io_failed": 0, 00:09:12.686 "io_timeout": 0, 00:09:12.686 "avg_latency_us": 69.30858996417793, 00:09:12.686 "min_latency_us": 36.77090909090909, 00:09:12.686 "max_latency_us": 1765.0036363636364 00:09:12.686 } 00:09:12.686 ], 00:09:12.686 "core_count": 1 00:09:12.686 } 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63291 00:09:12.686 03:20:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63291 ']' 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63291 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63291 00:09:12.686 killing process with pid 63291 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63291' 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63291 00:09:12.686 [2024-11-05 03:20:26.230008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.686 03:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63291 00:09:12.945 [2024-11-05 03:20:26.354204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wzqghGEL50 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.881 03:20:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:13.881 00:09:13.881 real 0m4.470s 00:09:13.881 user 0m5.643s 00:09:13.881 sys 0m0.554s 00:09:13.881 ************************************ 00:09:13.881 END TEST raid_read_error_test 00:09:13.881 ************************************ 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.881 03:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.881 03:20:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:13.881 03:20:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:13.881 03:20:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.881 03:20:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.881 ************************************ 00:09:13.881 START TEST raid_write_error_test 00:09:13.881 ************************************ 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.881 
03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BE8qcIJLjA 00:09:13.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63431 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63431 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63431 ']' 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.881 03:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.140 [2024-11-05 03:20:27.536619] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:09:14.141 [2024-11-05 03:20:27.536985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63431 ] 00:09:14.141 [2024-11-05 03:20:27.706604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.399 [2024-11-05 03:20:27.838929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.399 [2024-11-05 03:20:28.021964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.399 [2024-11-05 03:20:28.022037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.966 BaseBdev1_malloc 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.966 true 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.966 [2024-11-05 03:20:28.557543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.966 [2024-11-05 03:20:28.557631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.966 [2024-11-05 03:20:28.557662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.966 [2024-11-05 03:20:28.557680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.966 [2024-11-05 03:20:28.560646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.966 [2024-11-05 03:20:28.560726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.966 BaseBdev1 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.966 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 BaseBdev2_malloc 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.225 03:20:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 true 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 [2024-11-05 03:20:28.621243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.225 [2024-11-05 03:20:28.621346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.225 [2024-11-05 03:20:28.621372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.225 [2024-11-05 03:20:28.621388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.225 [2024-11-05 03:20:28.624049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.225 [2024-11-05 03:20:28.624125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.225 BaseBdev2 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 [2024-11-05 03:20:28.633298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:15.225 [2024-11-05 03:20:28.635799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.225 [2024-11-05 03:20:28.636259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.225 [2024-11-05 03:20:28.636289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:15.225 [2024-11-05 03:20:28.636638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:15.225 [2024-11-05 03:20:28.636864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.225 [2024-11-05 03:20:28.636880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:15.225 [2024-11-05 03:20:28.637042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.225 "name": "raid_bdev1", 00:09:15.225 "uuid": "bb424032-d11c-434e-9448-6de89368ded0", 00:09:15.225 "strip_size_kb": 0, 00:09:15.225 "state": "online", 00:09:15.225 "raid_level": "raid1", 00:09:15.225 "superblock": true, 00:09:15.225 "num_base_bdevs": 2, 00:09:15.225 "num_base_bdevs_discovered": 2, 00:09:15.225 "num_base_bdevs_operational": 2, 00:09:15.225 "base_bdevs_list": [ 00:09:15.225 { 00:09:15.225 "name": "BaseBdev1", 00:09:15.225 "uuid": "84d80cd5-1bc7-575b-a8b7-61eba97e15c2", 00:09:15.225 "is_configured": true, 00:09:15.225 "data_offset": 2048, 00:09:15.225 "data_size": 63488 00:09:15.225 }, 00:09:15.225 { 00:09:15.225 "name": "BaseBdev2", 00:09:15.225 "uuid": "ad98d1f4-7283-57eb-a9fc-c99657d78d97", 00:09:15.225 "is_configured": true, 00:09:15.225 "data_offset": 2048, 00:09:15.225 "data_size": 63488 00:09:15.225 } 00:09:15.225 ] 00:09:15.225 }' 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.225 03:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.793 03:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:15.793 03:20:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:15.793 [2024-11-05 03:20:29.286849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.731 [2024-11-05 03:20:30.166580] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:16.731 [2024-11-05 03:20:30.166663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.731 [2024-11-05 03:20:30.166920] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.731 "name": "raid_bdev1", 00:09:16.731 "uuid": "bb424032-d11c-434e-9448-6de89368ded0", 00:09:16.731 "strip_size_kb": 0, 00:09:16.731 "state": "online", 00:09:16.731 "raid_level": "raid1", 00:09:16.731 "superblock": true, 00:09:16.731 "num_base_bdevs": 2, 00:09:16.731 "num_base_bdevs_discovered": 1, 00:09:16.731 "num_base_bdevs_operational": 1, 00:09:16.731 "base_bdevs_list": [ 00:09:16.731 { 00:09:16.731 "name": null, 00:09:16.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.731 "is_configured": false, 00:09:16.731 "data_offset": 0, 00:09:16.731 "data_size": 63488 00:09:16.731 }, 00:09:16.731 { 00:09:16.731 "name": 
"BaseBdev2", 00:09:16.731 "uuid": "ad98d1f4-7283-57eb-a9fc-c99657d78d97", 00:09:16.731 "is_configured": true, 00:09:16.731 "data_offset": 2048, 00:09:16.731 "data_size": 63488 00:09:16.731 } 00:09:16.731 ] 00:09:16.731 }' 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.731 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.298 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.298 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.299 [2024-11-05 03:20:30.710690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.299 [2024-11-05 03:20:30.710737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.299 [2024-11-05 03:20:30.714413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.299 [2024-11-05 03:20:30.714613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.299 [2024-11-05 03:20:30.714830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.299 [2024-11-05 03:20:30.714986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:17.299 { 00:09:17.299 "results": [ 00:09:17.299 { 00:09:17.299 "job": "raid_bdev1", 00:09:17.299 "core_mask": "0x1", 00:09:17.299 "workload": "randrw", 00:09:17.299 "percentage": 50, 00:09:17.299 "status": "finished", 00:09:17.299 "queue_depth": 1, 00:09:17.299 "io_size": 131072, 00:09:17.299 "runtime": 1.421501, 00:09:17.299 "iops": 15010.189933035574, 00:09:17.299 "mibps": 1876.2737416294467, 00:09:17.299 "io_failed": 0, 00:09:17.299 "io_timeout": 0, 
00:09:17.299 "avg_latency_us": 62.53487727251423, 00:09:17.299 "min_latency_us": 36.07272727272727, 00:09:17.299 "max_latency_us": 1876.7127272727273 00:09:17.299 } 00:09:17.299 ], 00:09:17.299 "core_count": 1 00:09:17.299 } 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63431 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63431 ']' 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63431 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63431 00:09:17.299 killing process with pid 63431 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63431' 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63431 00:09:17.299 [2024-11-05 03:20:30.758726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.299 03:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63431 00:09:17.299 [2024-11-05 03:20:30.888115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BE8qcIJLjA 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:18.675 ************************************ 00:09:18.675 END TEST raid_write_error_test 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:18.675 00:09:18.675 real 0m4.553s 00:09:18.675 user 0m5.712s 00:09:18.675 sys 0m0.554s 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.675 03:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.675 ************************************ 00:09:18.675 03:20:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:18.675 03:20:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:18.675 03:20:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:18.675 03:20:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:18.675 03:20:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.675 03:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.675 ************************************ 00:09:18.675 START TEST raid_state_function_test 00:09:18.675 ************************************ 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.675 
03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63580 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63580' 00:09:18.675 Process raid pid: 63580 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63580 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63580 ']' 00:09:18.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:18.675 03:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.675 [2024-11-05 03:20:32.125391] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:09:18.675 [2024-11-05 03:20:32.125545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.675 [2024-11-05 03:20:32.293586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.933 [2024-11-05 03:20:32.410407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.192 [2024-11-05 03:20:32.594593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.192 [2024-11-05 03:20:32.594636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 [2024-11-05 03:20:33.044127] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.451 [2024-11-05 03:20:33.044191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.451 [2024-11-05 03:20:33.044209] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.451 [2024-11-05 03:20:33.044226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.451 [2024-11-05 03:20:33.044237] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.451 [2024-11-05 03:20:33.044252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.451 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.452 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.452 03:20:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.452 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.452 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.710 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.710 "name": "Existed_Raid", 00:09:19.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.710 "strip_size_kb": 64, 00:09:19.710 "state": "configuring", 00:09:19.710 "raid_level": "raid0", 00:09:19.710 "superblock": false, 00:09:19.710 "num_base_bdevs": 3, 00:09:19.710 "num_base_bdevs_discovered": 0, 00:09:19.710 "num_base_bdevs_operational": 3, 00:09:19.710 "base_bdevs_list": [ 00:09:19.710 { 00:09:19.710 "name": "BaseBdev1", 00:09:19.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.710 "is_configured": false, 00:09:19.710 "data_offset": 0, 00:09:19.710 "data_size": 0 00:09:19.710 }, 00:09:19.710 { 00:09:19.710 "name": "BaseBdev2", 00:09:19.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.710 "is_configured": false, 00:09:19.710 "data_offset": 0, 00:09:19.710 "data_size": 0 00:09:19.710 }, 00:09:19.710 { 00:09:19.710 "name": "BaseBdev3", 00:09:19.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.710 "is_configured": false, 00:09:19.710 "data_offset": 0, 00:09:19.710 "data_size": 0 00:09:19.710 } 00:09:19.710 ] 00:09:19.710 }' 00:09:19.710 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.710 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.969 03:20:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 [2024-11-05 03:20:33.564752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.969 [2024-11-05 03:20:33.564793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 [2024-11-05 03:20:33.576729] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.969 [2024-11-05 03:20:33.576796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.969 [2024-11-05 03:20:33.576811] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.969 [2024-11-05 03:20:33.576825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.969 [2024-11-05 03:20:33.576834] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.969 [2024-11-05 03:20:33.576848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:19.969 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.228 [2024-11-05 03:20:33.619305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.228 BaseBdev1 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.228 [ 00:09:20.228 { 00:09:20.228 "name": "BaseBdev1", 00:09:20.228 "aliases": [ 00:09:20.228 "3f3692c9-5bf5-4d90-9812-dfaef5e267ea" 00:09:20.228 ], 00:09:20.228 
"product_name": "Malloc disk", 00:09:20.228 "block_size": 512, 00:09:20.228 "num_blocks": 65536, 00:09:20.228 "uuid": "3f3692c9-5bf5-4d90-9812-dfaef5e267ea", 00:09:20.228 "assigned_rate_limits": { 00:09:20.228 "rw_ios_per_sec": 0, 00:09:20.228 "rw_mbytes_per_sec": 0, 00:09:20.228 "r_mbytes_per_sec": 0, 00:09:20.228 "w_mbytes_per_sec": 0 00:09:20.228 }, 00:09:20.228 "claimed": true, 00:09:20.228 "claim_type": "exclusive_write", 00:09:20.228 "zoned": false, 00:09:20.228 "supported_io_types": { 00:09:20.228 "read": true, 00:09:20.228 "write": true, 00:09:20.228 "unmap": true, 00:09:20.228 "flush": true, 00:09:20.228 "reset": true, 00:09:20.228 "nvme_admin": false, 00:09:20.228 "nvme_io": false, 00:09:20.228 "nvme_io_md": false, 00:09:20.228 "write_zeroes": true, 00:09:20.228 "zcopy": true, 00:09:20.228 "get_zone_info": false, 00:09:20.228 "zone_management": false, 00:09:20.228 "zone_append": false, 00:09:20.228 "compare": false, 00:09:20.228 "compare_and_write": false, 00:09:20.228 "abort": true, 00:09:20.228 "seek_hole": false, 00:09:20.228 "seek_data": false, 00:09:20.228 "copy": true, 00:09:20.228 "nvme_iov_md": false 00:09:20.228 }, 00:09:20.228 "memory_domains": [ 00:09:20.228 { 00:09:20.228 "dma_device_id": "system", 00:09:20.228 "dma_device_type": 1 00:09:20.228 }, 00:09:20.228 { 00:09:20.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.228 "dma_device_type": 2 00:09:20.228 } 00:09:20.228 ], 00:09:20.228 "driver_specific": {} 00:09:20.228 } 00:09:20.228 ] 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.228 03:20:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.228 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.228 "name": "Existed_Raid", 00:09:20.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.228 "strip_size_kb": 64, 00:09:20.228 "state": "configuring", 00:09:20.228 "raid_level": "raid0", 00:09:20.228 "superblock": false, 00:09:20.228 "num_base_bdevs": 3, 00:09:20.228 "num_base_bdevs_discovered": 1, 00:09:20.228 "num_base_bdevs_operational": 3, 00:09:20.228 "base_bdevs_list": [ 00:09:20.228 { 00:09:20.228 "name": "BaseBdev1", 
00:09:20.228 "uuid": "3f3692c9-5bf5-4d90-9812-dfaef5e267ea", 00:09:20.228 "is_configured": true, 00:09:20.228 "data_offset": 0, 00:09:20.228 "data_size": 65536 00:09:20.228 }, 00:09:20.228 { 00:09:20.228 "name": "BaseBdev2", 00:09:20.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.228 "is_configured": false, 00:09:20.228 "data_offset": 0, 00:09:20.228 "data_size": 0 00:09:20.228 }, 00:09:20.228 { 00:09:20.228 "name": "BaseBdev3", 00:09:20.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.228 "is_configured": false, 00:09:20.228 "data_offset": 0, 00:09:20.228 "data_size": 0 00:09:20.228 } 00:09:20.228 ] 00:09:20.228 }' 00:09:20.229 03:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.229 03:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 [2024-11-05 03:20:34.227576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.796 [2024-11-05 03:20:34.227658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.796 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 [2024-11-05 
03:20:34.235618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.796 [2024-11-05 03:20:34.238084] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.797 [2024-11-05 03:20:34.238139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.797 [2024-11-05 03:20:34.238155] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.797 [2024-11-05 03:20:34.238183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.797 "name": "Existed_Raid", 00:09:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.797 "strip_size_kb": 64, 00:09:20.797 "state": "configuring", 00:09:20.797 "raid_level": "raid0", 00:09:20.797 "superblock": false, 00:09:20.797 "num_base_bdevs": 3, 00:09:20.797 "num_base_bdevs_discovered": 1, 00:09:20.797 "num_base_bdevs_operational": 3, 00:09:20.797 "base_bdevs_list": [ 00:09:20.797 { 00:09:20.797 "name": "BaseBdev1", 00:09:20.797 "uuid": "3f3692c9-5bf5-4d90-9812-dfaef5e267ea", 00:09:20.797 "is_configured": true, 00:09:20.797 "data_offset": 0, 00:09:20.797 "data_size": 65536 00:09:20.797 }, 00:09:20.797 { 00:09:20.797 "name": "BaseBdev2", 00:09:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.797 "is_configured": false, 00:09:20.797 "data_offset": 0, 00:09:20.797 "data_size": 0 00:09:20.797 }, 00:09:20.797 { 00:09:20.797 "name": "BaseBdev3", 00:09:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.797 "is_configured": false, 00:09:20.797 "data_offset": 0, 00:09:20.797 "data_size": 0 00:09:20.797 } 00:09:20.797 ] 00:09:20.797 }' 00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:20.797 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.379 [2024-11-05 03:20:34.820552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.379 BaseBdev2 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.379 03:20:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.379 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.379 [ 00:09:21.379 { 00:09:21.379 "name": "BaseBdev2", 00:09:21.379 "aliases": [ 00:09:21.380 "cd58806a-d68d-46cb-a6b4-cedb1cc1d756" 00:09:21.380 ], 00:09:21.380 "product_name": "Malloc disk", 00:09:21.380 "block_size": 512, 00:09:21.380 "num_blocks": 65536, 00:09:21.380 "uuid": "cd58806a-d68d-46cb-a6b4-cedb1cc1d756", 00:09:21.380 "assigned_rate_limits": { 00:09:21.380 "rw_ios_per_sec": 0, 00:09:21.380 "rw_mbytes_per_sec": 0, 00:09:21.380 "r_mbytes_per_sec": 0, 00:09:21.380 "w_mbytes_per_sec": 0 00:09:21.380 }, 00:09:21.380 "claimed": true, 00:09:21.380 "claim_type": "exclusive_write", 00:09:21.380 "zoned": false, 00:09:21.380 "supported_io_types": { 00:09:21.380 "read": true, 00:09:21.380 "write": true, 00:09:21.380 "unmap": true, 00:09:21.380 "flush": true, 00:09:21.380 "reset": true, 00:09:21.380 "nvme_admin": false, 00:09:21.380 "nvme_io": false, 00:09:21.380 "nvme_io_md": false, 00:09:21.380 "write_zeroes": true, 00:09:21.380 "zcopy": true, 00:09:21.380 "get_zone_info": false, 00:09:21.380 "zone_management": false, 00:09:21.380 "zone_append": false, 00:09:21.380 "compare": false, 00:09:21.380 "compare_and_write": false, 00:09:21.380 "abort": true, 00:09:21.380 "seek_hole": false, 00:09:21.380 "seek_data": false, 00:09:21.380 "copy": true, 00:09:21.380 "nvme_iov_md": false 00:09:21.380 }, 00:09:21.380 "memory_domains": [ 00:09:21.380 { 00:09:21.380 "dma_device_id": "system", 00:09:21.380 "dma_device_type": 1 00:09:21.380 }, 00:09:21.380 { 00:09:21.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.380 "dma_device_type": 2 00:09:21.380 } 00:09:21.380 ], 00:09:21.380 "driver_specific": {} 00:09:21.380 } 00:09:21.380 ] 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.380 03:20:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.380 "name": "Existed_Raid", 00:09:21.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.380 "strip_size_kb": 64, 00:09:21.380 "state": "configuring", 00:09:21.380 "raid_level": "raid0", 00:09:21.380 "superblock": false, 00:09:21.380 "num_base_bdevs": 3, 00:09:21.380 "num_base_bdevs_discovered": 2, 00:09:21.380 "num_base_bdevs_operational": 3, 00:09:21.380 "base_bdevs_list": [ 00:09:21.380 { 00:09:21.380 "name": "BaseBdev1", 00:09:21.380 "uuid": "3f3692c9-5bf5-4d90-9812-dfaef5e267ea", 00:09:21.380 "is_configured": true, 00:09:21.380 "data_offset": 0, 00:09:21.380 "data_size": 65536 00:09:21.380 }, 00:09:21.380 { 00:09:21.380 "name": "BaseBdev2", 00:09:21.380 "uuid": "cd58806a-d68d-46cb-a6b4-cedb1cc1d756", 00:09:21.380 "is_configured": true, 00:09:21.380 "data_offset": 0, 00:09:21.380 "data_size": 65536 00:09:21.380 }, 00:09:21.380 { 00:09:21.380 "name": "BaseBdev3", 00:09:21.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.380 "is_configured": false, 00:09:21.380 "data_offset": 0, 00:09:21.380 "data_size": 0 00:09:21.380 } 00:09:21.380 ] 00:09:21.380 }' 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.380 03:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.971 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.972 [2024-11-05 03:20:35.452800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.972 [2024-11-05 03:20:35.452847] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.972 [2024-11-05 03:20:35.452866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:21.972 [2024-11-05 03:20:35.453181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.972 [2024-11-05 03:20:35.453408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.972 [2024-11-05 03:20:35.453423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.972 [2024-11-05 03:20:35.453816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.972 BaseBdev3 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.972 
03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.972 [ 00:09:21.972 { 00:09:21.972 "name": "BaseBdev3", 00:09:21.972 "aliases": [ 00:09:21.972 "00951eb6-1481-4431-93dd-78aa2458dca2" 00:09:21.972 ], 00:09:21.972 "product_name": "Malloc disk", 00:09:21.972 "block_size": 512, 00:09:21.972 "num_blocks": 65536, 00:09:21.972 "uuid": "00951eb6-1481-4431-93dd-78aa2458dca2", 00:09:21.972 "assigned_rate_limits": { 00:09:21.972 "rw_ios_per_sec": 0, 00:09:21.972 "rw_mbytes_per_sec": 0, 00:09:21.972 "r_mbytes_per_sec": 0, 00:09:21.972 "w_mbytes_per_sec": 0 00:09:21.972 }, 00:09:21.972 "claimed": true, 00:09:21.972 "claim_type": "exclusive_write", 00:09:21.972 "zoned": false, 00:09:21.972 "supported_io_types": { 00:09:21.972 "read": true, 00:09:21.972 "write": true, 00:09:21.972 "unmap": true, 00:09:21.972 "flush": true, 00:09:21.972 "reset": true, 00:09:21.972 "nvme_admin": false, 00:09:21.972 "nvme_io": false, 00:09:21.972 "nvme_io_md": false, 00:09:21.972 "write_zeroes": true, 00:09:21.972 "zcopy": true, 00:09:21.972 "get_zone_info": false, 00:09:21.972 "zone_management": false, 00:09:21.972 "zone_append": false, 00:09:21.972 "compare": false, 00:09:21.972 "compare_and_write": false, 00:09:21.972 "abort": true, 00:09:21.972 "seek_hole": false, 00:09:21.972 "seek_data": false, 00:09:21.972 "copy": true, 00:09:21.972 "nvme_iov_md": false 00:09:21.972 }, 00:09:21.972 "memory_domains": [ 00:09:21.972 { 00:09:21.972 "dma_device_id": "system", 00:09:21.972 "dma_device_type": 1 00:09:21.972 }, 00:09:21.972 { 00:09:21.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.972 "dma_device_type": 2 00:09:21.972 } 00:09:21.972 ], 00:09:21.972 "driver_specific": {} 00:09:21.972 } 00:09:21.972 ] 
00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.972 "name": "Existed_Raid", 00:09:21.972 "uuid": "e86b338e-596f-4eab-9db0-19af12b55a4e", 00:09:21.972 "strip_size_kb": 64, 00:09:21.972 "state": "online", 00:09:21.972 "raid_level": "raid0", 00:09:21.972 "superblock": false, 00:09:21.972 "num_base_bdevs": 3, 00:09:21.972 "num_base_bdevs_discovered": 3, 00:09:21.972 "num_base_bdevs_operational": 3, 00:09:21.972 "base_bdevs_list": [ 00:09:21.972 { 00:09:21.972 "name": "BaseBdev1", 00:09:21.972 "uuid": "3f3692c9-5bf5-4d90-9812-dfaef5e267ea", 00:09:21.972 "is_configured": true, 00:09:21.972 "data_offset": 0, 00:09:21.972 "data_size": 65536 00:09:21.972 }, 00:09:21.972 { 00:09:21.972 "name": "BaseBdev2", 00:09:21.972 "uuid": "cd58806a-d68d-46cb-a6b4-cedb1cc1d756", 00:09:21.972 "is_configured": true, 00:09:21.972 "data_offset": 0, 00:09:21.972 "data_size": 65536 00:09:21.972 }, 00:09:21.972 { 00:09:21.972 "name": "BaseBdev3", 00:09:21.972 "uuid": "00951eb6-1481-4431-93dd-78aa2458dca2", 00:09:21.972 "is_configured": true, 00:09:21.972 "data_offset": 0, 00:09:21.972 "data_size": 65536 00:09:21.972 } 00:09:21.972 ] 00:09:21.972 }' 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.972 03:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.541 [2024-11-05 03:20:36.033429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.541 "name": "Existed_Raid", 00:09:22.541 "aliases": [ 00:09:22.541 "e86b338e-596f-4eab-9db0-19af12b55a4e" 00:09:22.541 ], 00:09:22.541 "product_name": "Raid Volume", 00:09:22.541 "block_size": 512, 00:09:22.541 "num_blocks": 196608, 00:09:22.541 "uuid": "e86b338e-596f-4eab-9db0-19af12b55a4e", 00:09:22.541 "assigned_rate_limits": { 00:09:22.541 "rw_ios_per_sec": 0, 00:09:22.541 "rw_mbytes_per_sec": 0, 00:09:22.541 "r_mbytes_per_sec": 0, 00:09:22.541 "w_mbytes_per_sec": 0 00:09:22.541 }, 00:09:22.541 "claimed": false, 00:09:22.541 "zoned": false, 00:09:22.541 "supported_io_types": { 00:09:22.541 "read": true, 00:09:22.541 "write": true, 00:09:22.541 "unmap": true, 00:09:22.541 "flush": true, 00:09:22.541 "reset": true, 00:09:22.541 "nvme_admin": false, 00:09:22.541 "nvme_io": false, 00:09:22.541 "nvme_io_md": false, 00:09:22.541 "write_zeroes": true, 00:09:22.541 "zcopy": false, 00:09:22.541 "get_zone_info": false, 00:09:22.541 "zone_management": false, 00:09:22.541 
"zone_append": false, 00:09:22.541 "compare": false, 00:09:22.541 "compare_and_write": false, 00:09:22.541 "abort": false, 00:09:22.541 "seek_hole": false, 00:09:22.541 "seek_data": false, 00:09:22.541 "copy": false, 00:09:22.541 "nvme_iov_md": false 00:09:22.541 }, 00:09:22.541 "memory_domains": [ 00:09:22.541 { 00:09:22.541 "dma_device_id": "system", 00:09:22.541 "dma_device_type": 1 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.541 "dma_device_type": 2 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "dma_device_id": "system", 00:09:22.541 "dma_device_type": 1 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.541 "dma_device_type": 2 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "dma_device_id": "system", 00:09:22.541 "dma_device_type": 1 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.541 "dma_device_type": 2 00:09:22.541 } 00:09:22.541 ], 00:09:22.541 "driver_specific": { 00:09:22.541 "raid": { 00:09:22.541 "uuid": "e86b338e-596f-4eab-9db0-19af12b55a4e", 00:09:22.541 "strip_size_kb": 64, 00:09:22.541 "state": "online", 00:09:22.541 "raid_level": "raid0", 00:09:22.541 "superblock": false, 00:09:22.541 "num_base_bdevs": 3, 00:09:22.541 "num_base_bdevs_discovered": 3, 00:09:22.541 "num_base_bdevs_operational": 3, 00:09:22.541 "base_bdevs_list": [ 00:09:22.541 { 00:09:22.541 "name": "BaseBdev1", 00:09:22.541 "uuid": "3f3692c9-5bf5-4d90-9812-dfaef5e267ea", 00:09:22.541 "is_configured": true, 00:09:22.541 "data_offset": 0, 00:09:22.541 "data_size": 65536 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "name": "BaseBdev2", 00:09:22.541 "uuid": "cd58806a-d68d-46cb-a6b4-cedb1cc1d756", 00:09:22.541 "is_configured": true, 00:09:22.541 "data_offset": 0, 00:09:22.541 "data_size": 65536 00:09:22.541 }, 00:09:22.541 { 00:09:22.541 "name": "BaseBdev3", 00:09:22.541 "uuid": "00951eb6-1481-4431-93dd-78aa2458dca2", 00:09:22.541 "is_configured": true, 
00:09:22.541 "data_offset": 0, 00:09:22.541 "data_size": 65536 00:09:22.541 } 00:09:22.541 ] 00:09:22.541 } 00:09:22.541 } 00:09:22.541 }' 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:22.541 BaseBdev2 00:09:22.541 BaseBdev3' 00:09:22.541 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.801 [2024-11-05 03:20:36.357164] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.801 [2024-11-05 03:20:36.357375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.801 [2024-11-05 03:20:36.357465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:22.801 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.802 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.802 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.802 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.802 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.061 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.061 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.061 "name": "Existed_Raid", 00:09:23.061 "uuid": "e86b338e-596f-4eab-9db0-19af12b55a4e", 00:09:23.061 "strip_size_kb": 64, 00:09:23.061 "state": "offline", 00:09:23.061 "raid_level": "raid0", 00:09:23.061 "superblock": false, 00:09:23.061 "num_base_bdevs": 3, 00:09:23.061 "num_base_bdevs_discovered": 2, 00:09:23.061 "num_base_bdevs_operational": 2, 00:09:23.061 "base_bdevs_list": [ 00:09:23.061 { 00:09:23.061 "name": null, 00:09:23.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.061 "is_configured": false, 00:09:23.061 "data_offset": 0, 00:09:23.061 "data_size": 65536 00:09:23.061 }, 00:09:23.061 { 00:09:23.061 "name": "BaseBdev2", 00:09:23.061 "uuid": "cd58806a-d68d-46cb-a6b4-cedb1cc1d756", 00:09:23.061 "is_configured": true, 00:09:23.061 "data_offset": 0, 00:09:23.061 "data_size": 65536 00:09:23.061 }, 00:09:23.061 { 00:09:23.061 "name": "BaseBdev3", 00:09:23.061 "uuid": "00951eb6-1481-4431-93dd-78aa2458dca2", 00:09:23.061 "is_configured": true, 00:09:23.061 "data_offset": 0, 00:09:23.061 "data_size": 65536 00:09:23.061 } 00:09:23.061 ] 00:09:23.061 }' 00:09:23.061 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.061 03:20:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.629 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:23.629 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.629 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.630 03:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.630 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.630 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.630 03:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.630 [2024-11-05 03:20:37.026620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.630 03:20:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.630 [2024-11-05 03:20:37.169419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.630 [2024-11-05 03:20:37.169495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:23.630 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 BaseBdev2 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 [ 00:09:23.890 { 00:09:23.890 "name": "BaseBdev2", 00:09:23.890 "aliases": [ 00:09:23.890 "5e40131d-3fc0-4009-9111-411b33d154dd" 00:09:23.890 ], 00:09:23.890 "product_name": "Malloc disk", 00:09:23.890 "block_size": 512, 00:09:23.890 "num_blocks": 65536, 00:09:23.890 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:23.890 "assigned_rate_limits": { 00:09:23.890 "rw_ios_per_sec": 0, 00:09:23.890 "rw_mbytes_per_sec": 0, 00:09:23.890 "r_mbytes_per_sec": 0, 00:09:23.890 "w_mbytes_per_sec": 0 00:09:23.890 }, 00:09:23.890 "claimed": false, 00:09:23.890 "zoned": false, 00:09:23.890 "supported_io_types": { 00:09:23.890 "read": true, 00:09:23.890 "write": true, 00:09:23.890 "unmap": true, 00:09:23.890 "flush": true, 00:09:23.890 "reset": true, 00:09:23.890 "nvme_admin": false, 00:09:23.890 "nvme_io": false, 00:09:23.890 "nvme_io_md": false, 00:09:23.890 "write_zeroes": true, 00:09:23.890 "zcopy": true, 00:09:23.890 "get_zone_info": false, 00:09:23.890 "zone_management": false, 00:09:23.890 "zone_append": false, 00:09:23.890 "compare": false, 00:09:23.890 "compare_and_write": false, 00:09:23.890 "abort": true, 00:09:23.890 "seek_hole": false, 00:09:23.890 "seek_data": false, 00:09:23.890 "copy": true, 00:09:23.890 "nvme_iov_md": false 00:09:23.890 }, 00:09:23.890 "memory_domains": [ 00:09:23.890 { 00:09:23.890 "dma_device_id": "system", 00:09:23.890 "dma_device_type": 1 00:09:23.890 }, 
00:09:23.890 { 00:09:23.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.890 "dma_device_type": 2 00:09:23.890 } 00:09:23.890 ], 00:09:23.890 "driver_specific": {} 00:09:23.890 } 00:09:23.890 ] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 BaseBdev3 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 [ 00:09:23.890 { 00:09:23.890 "name": "BaseBdev3", 00:09:23.890 "aliases": [ 00:09:23.890 "2a9b6414-64b9-47bd-abe5-7e02992aea57" 00:09:23.890 ], 00:09:23.890 "product_name": "Malloc disk", 00:09:23.890 "block_size": 512, 00:09:23.890 "num_blocks": 65536, 00:09:23.890 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:23.890 "assigned_rate_limits": { 00:09:23.890 "rw_ios_per_sec": 0, 00:09:23.890 "rw_mbytes_per_sec": 0, 00:09:23.890 "r_mbytes_per_sec": 0, 00:09:23.890 "w_mbytes_per_sec": 0 00:09:23.890 }, 00:09:23.890 "claimed": false, 00:09:23.890 "zoned": false, 00:09:23.890 "supported_io_types": { 00:09:23.890 "read": true, 00:09:23.890 "write": true, 00:09:23.890 "unmap": true, 00:09:23.890 "flush": true, 00:09:23.890 "reset": true, 00:09:23.890 "nvme_admin": false, 00:09:23.890 "nvme_io": false, 00:09:23.890 "nvme_io_md": false, 00:09:23.890 "write_zeroes": true, 00:09:23.890 "zcopy": true, 00:09:23.890 "get_zone_info": false, 00:09:23.890 "zone_management": false, 00:09:23.890 "zone_append": false, 00:09:23.890 "compare": false, 00:09:23.890 "compare_and_write": false, 00:09:23.890 "abort": true, 00:09:23.890 "seek_hole": false, 00:09:23.890 "seek_data": false, 00:09:23.890 "copy": true, 00:09:23.890 "nvme_iov_md": false 00:09:23.890 }, 00:09:23.890 "memory_domains": [ 00:09:23.890 { 00:09:23.890 "dma_device_id": "system", 00:09:23.890 "dma_device_type": 1 00:09:23.890 }, 00:09:23.890 { 
00:09:23.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.890 "dma_device_type": 2 00:09:23.890 } 00:09:23.890 ], 00:09:23.890 "driver_specific": {} 00:09:23.890 } 00:09:23.890 ] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.890 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 [2024-11-05 03:20:37.455699] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.890 [2024-11-05 03:20:37.455752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.891 [2024-11-05 03:20:37.455799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.891 [2024-11-05 03:20:37.458292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.891 "name": "Existed_Raid", 00:09:23.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.891 "strip_size_kb": 64, 00:09:23.891 "state": "configuring", 00:09:23.891 "raid_level": "raid0", 00:09:23.891 "superblock": false, 00:09:23.891 "num_base_bdevs": 3, 00:09:23.891 "num_base_bdevs_discovered": 2, 00:09:23.891 "num_base_bdevs_operational": 3, 00:09:23.891 "base_bdevs_list": [ 00:09:23.891 { 00:09:23.891 "name": "BaseBdev1", 00:09:23.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.891 
"is_configured": false, 00:09:23.891 "data_offset": 0, 00:09:23.891 "data_size": 0 00:09:23.891 }, 00:09:23.891 { 00:09:23.891 "name": "BaseBdev2", 00:09:23.891 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:23.891 "is_configured": true, 00:09:23.891 "data_offset": 0, 00:09:23.891 "data_size": 65536 00:09:23.891 }, 00:09:23.891 { 00:09:23.891 "name": "BaseBdev3", 00:09:23.891 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:23.891 "is_configured": true, 00:09:23.891 "data_offset": 0, 00:09:23.891 "data_size": 65536 00:09:23.891 } 00:09:23.891 ] 00:09:23.891 }' 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.891 03:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 [2024-11-05 03:20:38.011846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.460 03:20:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.460 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.460 "name": "Existed_Raid", 00:09:24.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.460 "strip_size_kb": 64, 00:09:24.460 "state": "configuring", 00:09:24.460 "raid_level": "raid0", 00:09:24.460 "superblock": false, 00:09:24.460 "num_base_bdevs": 3, 00:09:24.460 "num_base_bdevs_discovered": 1, 00:09:24.460 "num_base_bdevs_operational": 3, 00:09:24.460 "base_bdevs_list": [ 00:09:24.460 { 00:09:24.460 "name": "BaseBdev1", 00:09:24.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.460 "is_configured": false, 00:09:24.460 "data_offset": 0, 00:09:24.460 "data_size": 0 00:09:24.460 }, 00:09:24.460 { 00:09:24.460 "name": null, 00:09:24.460 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:24.460 "is_configured": false, 00:09:24.460 "data_offset": 0, 
00:09:24.460 "data_size": 65536 00:09:24.460 }, 00:09:24.460 { 00:09:24.460 "name": "BaseBdev3", 00:09:24.460 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:24.460 "is_configured": true, 00:09:24.460 "data_offset": 0, 00:09:24.460 "data_size": 65536 00:09:24.461 } 00:09:24.461 ] 00:09:24.461 }' 00:09:24.461 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.461 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.028 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.287 [2024-11-05 03:20:38.673136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.287 BaseBdev1 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.287 [ 00:09:25.287 { 00:09:25.287 "name": "BaseBdev1", 00:09:25.287 "aliases": [ 00:09:25.287 "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5" 00:09:25.287 ], 00:09:25.287 "product_name": "Malloc disk", 00:09:25.287 "block_size": 512, 00:09:25.287 "num_blocks": 65536, 00:09:25.287 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:25.287 "assigned_rate_limits": { 00:09:25.287 "rw_ios_per_sec": 0, 00:09:25.287 "rw_mbytes_per_sec": 0, 00:09:25.287 "r_mbytes_per_sec": 0, 00:09:25.287 "w_mbytes_per_sec": 0 00:09:25.287 }, 00:09:25.287 "claimed": true, 00:09:25.287 "claim_type": "exclusive_write", 00:09:25.287 "zoned": false, 00:09:25.287 "supported_io_types": { 00:09:25.287 "read": true, 00:09:25.287 "write": true, 00:09:25.287 "unmap": 
true, 00:09:25.287 "flush": true, 00:09:25.287 "reset": true, 00:09:25.287 "nvme_admin": false, 00:09:25.287 "nvme_io": false, 00:09:25.287 "nvme_io_md": false, 00:09:25.287 "write_zeroes": true, 00:09:25.287 "zcopy": true, 00:09:25.287 "get_zone_info": false, 00:09:25.287 "zone_management": false, 00:09:25.287 "zone_append": false, 00:09:25.287 "compare": false, 00:09:25.287 "compare_and_write": false, 00:09:25.287 "abort": true, 00:09:25.287 "seek_hole": false, 00:09:25.287 "seek_data": false, 00:09:25.287 "copy": true, 00:09:25.287 "nvme_iov_md": false 00:09:25.287 }, 00:09:25.287 "memory_domains": [ 00:09:25.287 { 00:09:25.287 "dma_device_id": "system", 00:09:25.287 "dma_device_type": 1 00:09:25.287 }, 00:09:25.287 { 00:09:25.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.287 "dma_device_type": 2 00:09:25.287 } 00:09:25.287 ], 00:09:25.287 "driver_specific": {} 00:09:25.287 } 00:09:25.287 ] 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.287 03:20:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.287 "name": "Existed_Raid", 00:09:25.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.287 "strip_size_kb": 64, 00:09:25.287 "state": "configuring", 00:09:25.287 "raid_level": "raid0", 00:09:25.287 "superblock": false, 00:09:25.287 "num_base_bdevs": 3, 00:09:25.287 "num_base_bdevs_discovered": 2, 00:09:25.287 "num_base_bdevs_operational": 3, 00:09:25.287 "base_bdevs_list": [ 00:09:25.287 { 00:09:25.287 "name": "BaseBdev1", 00:09:25.287 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:25.287 "is_configured": true, 00:09:25.287 "data_offset": 0, 00:09:25.287 "data_size": 65536 00:09:25.287 }, 00:09:25.287 { 00:09:25.287 "name": null, 00:09:25.287 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:25.287 "is_configured": false, 00:09:25.287 "data_offset": 0, 00:09:25.287 "data_size": 65536 00:09:25.287 }, 00:09:25.287 { 00:09:25.287 "name": "BaseBdev3", 00:09:25.287 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:25.287 "is_configured": true, 00:09:25.287 "data_offset": 0, 
00:09:25.287 "data_size": 65536 00:09:25.287 } 00:09:25.287 ] 00:09:25.287 }' 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.287 03:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.855 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.855 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.856 [2024-11-05 03:20:39.297351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.856 "name": "Existed_Raid", 00:09:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.856 "strip_size_kb": 64, 00:09:25.856 "state": "configuring", 00:09:25.856 "raid_level": "raid0", 00:09:25.856 "superblock": false, 00:09:25.856 "num_base_bdevs": 3, 00:09:25.856 "num_base_bdevs_discovered": 1, 00:09:25.856 "num_base_bdevs_operational": 3, 00:09:25.856 "base_bdevs_list": [ 00:09:25.856 { 00:09:25.856 "name": "BaseBdev1", 00:09:25.856 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:25.856 "is_configured": true, 00:09:25.856 "data_offset": 0, 00:09:25.856 "data_size": 65536 00:09:25.856 }, 00:09:25.856 { 
00:09:25.856 "name": null, 00:09:25.856 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:25.856 "is_configured": false, 00:09:25.856 "data_offset": 0, 00:09:25.856 "data_size": 65536 00:09:25.856 }, 00:09:25.856 { 00:09:25.856 "name": null, 00:09:25.856 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:25.856 "is_configured": false, 00:09:25.856 "data_offset": 0, 00:09:25.856 "data_size": 65536 00:09:25.856 } 00:09:25.856 ] 00:09:25.856 }' 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.856 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.424 [2024-11-05 03:20:39.921629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.424 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.425 "name": "Existed_Raid", 00:09:26.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.425 "strip_size_kb": 64, 00:09:26.425 "state": "configuring", 00:09:26.425 "raid_level": "raid0", 00:09:26.425 
"superblock": false, 00:09:26.425 "num_base_bdevs": 3, 00:09:26.425 "num_base_bdevs_discovered": 2, 00:09:26.425 "num_base_bdevs_operational": 3, 00:09:26.425 "base_bdevs_list": [ 00:09:26.425 { 00:09:26.425 "name": "BaseBdev1", 00:09:26.425 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:26.425 "is_configured": true, 00:09:26.425 "data_offset": 0, 00:09:26.425 "data_size": 65536 00:09:26.425 }, 00:09:26.425 { 00:09:26.425 "name": null, 00:09:26.425 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:26.425 "is_configured": false, 00:09:26.425 "data_offset": 0, 00:09:26.425 "data_size": 65536 00:09:26.425 }, 00:09:26.425 { 00:09:26.425 "name": "BaseBdev3", 00:09:26.425 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:26.425 "is_configured": true, 00:09:26.425 "data_offset": 0, 00:09:26.425 "data_size": 65536 00:09:26.425 } 00:09:26.425 ] 00:09:26.425 }' 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.425 03:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 [2024-11-05 03:20:40.518216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.992 03:20:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.251 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.251 "name": "Existed_Raid", 00:09:27.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.251 "strip_size_kb": 64, 00:09:27.251 "state": "configuring", 00:09:27.251 "raid_level": "raid0", 00:09:27.251 "superblock": false, 00:09:27.251 "num_base_bdevs": 3, 00:09:27.251 "num_base_bdevs_discovered": 1, 00:09:27.251 "num_base_bdevs_operational": 3, 00:09:27.251 "base_bdevs_list": [ 00:09:27.251 { 00:09:27.251 "name": null, 00:09:27.251 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:27.251 "is_configured": false, 00:09:27.251 "data_offset": 0, 00:09:27.251 "data_size": 65536 00:09:27.251 }, 00:09:27.251 { 00:09:27.251 "name": null, 00:09:27.251 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:27.251 "is_configured": false, 00:09:27.251 "data_offset": 0, 00:09:27.251 "data_size": 65536 00:09:27.251 }, 00:09:27.251 { 00:09:27.251 "name": "BaseBdev3", 00:09:27.251 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:27.251 "is_configured": true, 00:09:27.251 "data_offset": 0, 00:09:27.251 "data_size": 65536 00:09:27.251 } 00:09:27.251 ] 00:09:27.251 }' 00:09:27.251 03:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.251 03:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.510 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.510 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.510 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.510 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.510 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.769 [2024-11-05 03:20:41.168979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.769 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.769 "name": "Existed_Raid", 00:09:27.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.769 "strip_size_kb": 64, 00:09:27.769 "state": "configuring", 00:09:27.769 "raid_level": "raid0", 00:09:27.769 "superblock": false, 00:09:27.769 "num_base_bdevs": 3, 00:09:27.770 "num_base_bdevs_discovered": 2, 00:09:27.770 "num_base_bdevs_operational": 3, 00:09:27.770 "base_bdevs_list": [ 00:09:27.770 { 00:09:27.770 "name": null, 00:09:27.770 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:27.770 "is_configured": false, 00:09:27.770 "data_offset": 0, 00:09:27.770 "data_size": 65536 00:09:27.770 }, 00:09:27.770 { 00:09:27.770 "name": "BaseBdev2", 00:09:27.770 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:27.770 "is_configured": true, 00:09:27.770 "data_offset": 0, 00:09:27.770 "data_size": 65536 00:09:27.770 }, 00:09:27.770 { 00:09:27.770 "name": "BaseBdev3", 00:09:27.770 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:27.770 "is_configured": true, 00:09:27.770 "data_offset": 0, 00:09:27.770 "data_size": 65536 00:09:27.770 } 00:09:27.770 ] 00:09:27.770 }' 00:09:27.770 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.770 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.339 03:20:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 [2024-11-05 03:20:41.877955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:28.339 [2024-11-05 03:20:41.877996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.339 [2024-11-05 03:20:41.878010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:28.339 [2024-11-05 03:20:41.878374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:28.339 [2024-11-05 03:20:41.878593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.339 [2024-11-05 03:20:41.878608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:28.339 [2024-11-05 03:20:41.879004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.339 NewBaseBdev 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:28.339 [ 00:09:28.339 { 00:09:28.339 "name": "NewBaseBdev", 00:09:28.339 "aliases": [ 00:09:28.339 "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5" 00:09:28.339 ], 00:09:28.339 "product_name": "Malloc disk", 00:09:28.339 "block_size": 512, 00:09:28.339 "num_blocks": 65536, 00:09:28.339 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:28.339 "assigned_rate_limits": { 00:09:28.339 "rw_ios_per_sec": 0, 00:09:28.339 "rw_mbytes_per_sec": 0, 00:09:28.339 "r_mbytes_per_sec": 0, 00:09:28.339 "w_mbytes_per_sec": 0 00:09:28.339 }, 00:09:28.339 "claimed": true, 00:09:28.339 "claim_type": "exclusive_write", 00:09:28.339 "zoned": false, 00:09:28.339 "supported_io_types": { 00:09:28.339 "read": true, 00:09:28.339 "write": true, 00:09:28.339 "unmap": true, 00:09:28.339 "flush": true, 00:09:28.339 "reset": true, 00:09:28.339 "nvme_admin": false, 00:09:28.339 "nvme_io": false, 00:09:28.339 "nvme_io_md": false, 00:09:28.339 "write_zeroes": true, 00:09:28.339 "zcopy": true, 00:09:28.339 "get_zone_info": false, 00:09:28.339 "zone_management": false, 00:09:28.339 "zone_append": false, 00:09:28.339 "compare": false, 00:09:28.339 "compare_and_write": false, 00:09:28.339 "abort": true, 00:09:28.339 "seek_hole": false, 00:09:28.339 "seek_data": false, 00:09:28.339 "copy": true, 00:09:28.339 "nvme_iov_md": false 00:09:28.339 }, 00:09:28.339 "memory_domains": [ 00:09:28.339 { 00:09:28.339 "dma_device_id": "system", 00:09:28.339 "dma_device_type": 1 00:09:28.339 }, 00:09:28.339 { 00:09:28.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.339 "dma_device_type": 2 00:09:28.339 } 00:09:28.339 ], 00:09:28.339 "driver_specific": {} 00:09:28.339 } 00:09:28.339 ] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.339 "name": "Existed_Raid", 00:09:28.339 "uuid": "2f462c50-4ad0-4155-9444-9b3a4da0bc8c", 00:09:28.339 "strip_size_kb": 64, 00:09:28.339 "state": "online", 00:09:28.339 "raid_level": "raid0", 00:09:28.339 "superblock": false, 00:09:28.339 "num_base_bdevs": 3, 00:09:28.339 
"num_base_bdevs_discovered": 3, 00:09:28.339 "num_base_bdevs_operational": 3, 00:09:28.339 "base_bdevs_list": [ 00:09:28.339 { 00:09:28.339 "name": "NewBaseBdev", 00:09:28.339 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:28.339 "is_configured": true, 00:09:28.339 "data_offset": 0, 00:09:28.339 "data_size": 65536 00:09:28.339 }, 00:09:28.339 { 00:09:28.339 "name": "BaseBdev2", 00:09:28.339 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:28.339 "is_configured": true, 00:09:28.339 "data_offset": 0, 00:09:28.339 "data_size": 65536 00:09:28.339 }, 00:09:28.339 { 00:09:28.339 "name": "BaseBdev3", 00:09:28.339 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:28.339 "is_configured": true, 00:09:28.339 "data_offset": 0, 00:09:28.339 "data_size": 65536 00:09:28.339 } 00:09:28.339 ] 00:09:28.339 }' 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.339 03:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.908 [2024-11-05 03:20:42.462562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.908 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.908 "name": "Existed_Raid", 00:09:28.908 "aliases": [ 00:09:28.908 "2f462c50-4ad0-4155-9444-9b3a4da0bc8c" 00:09:28.908 ], 00:09:28.908 "product_name": "Raid Volume", 00:09:28.908 "block_size": 512, 00:09:28.908 "num_blocks": 196608, 00:09:28.908 "uuid": "2f462c50-4ad0-4155-9444-9b3a4da0bc8c", 00:09:28.908 "assigned_rate_limits": { 00:09:28.908 "rw_ios_per_sec": 0, 00:09:28.908 "rw_mbytes_per_sec": 0, 00:09:28.908 "r_mbytes_per_sec": 0, 00:09:28.908 "w_mbytes_per_sec": 0 00:09:28.908 }, 00:09:28.908 "claimed": false, 00:09:28.908 "zoned": false, 00:09:28.908 "supported_io_types": { 00:09:28.908 "read": true, 00:09:28.908 "write": true, 00:09:28.908 "unmap": true, 00:09:28.908 "flush": true, 00:09:28.908 "reset": true, 00:09:28.908 "nvme_admin": false, 00:09:28.908 "nvme_io": false, 00:09:28.908 "nvme_io_md": false, 00:09:28.908 "write_zeroes": true, 00:09:28.908 "zcopy": false, 00:09:28.908 "get_zone_info": false, 00:09:28.908 "zone_management": false, 00:09:28.908 "zone_append": false, 00:09:28.908 "compare": false, 00:09:28.908 "compare_and_write": false, 00:09:28.908 "abort": false, 00:09:28.908 "seek_hole": false, 00:09:28.908 "seek_data": false, 00:09:28.908 "copy": false, 00:09:28.908 "nvme_iov_md": false 00:09:28.908 }, 00:09:28.908 "memory_domains": [ 00:09:28.908 { 00:09:28.908 "dma_device_id": "system", 00:09:28.908 "dma_device_type": 1 00:09:28.908 }, 00:09:28.908 { 00:09:28.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.908 "dma_device_type": 2 00:09:28.908 }, 
00:09:28.908 { 00:09:28.908 "dma_device_id": "system", 00:09:28.908 "dma_device_type": 1 00:09:28.908 }, 00:09:28.908 { 00:09:28.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.908 "dma_device_type": 2 00:09:28.908 }, 00:09:28.908 { 00:09:28.908 "dma_device_id": "system", 00:09:28.908 "dma_device_type": 1 00:09:28.908 }, 00:09:28.908 { 00:09:28.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.908 "dma_device_type": 2 00:09:28.908 } 00:09:28.908 ], 00:09:28.908 "driver_specific": { 00:09:28.908 "raid": { 00:09:28.908 "uuid": "2f462c50-4ad0-4155-9444-9b3a4da0bc8c", 00:09:28.908 "strip_size_kb": 64, 00:09:28.908 "state": "online", 00:09:28.908 "raid_level": "raid0", 00:09:28.908 "superblock": false, 00:09:28.908 "num_base_bdevs": 3, 00:09:28.908 "num_base_bdevs_discovered": 3, 00:09:28.908 "num_base_bdevs_operational": 3, 00:09:28.908 "base_bdevs_list": [ 00:09:28.908 { 00:09:28.908 "name": "NewBaseBdev", 00:09:28.908 "uuid": "0972f10e-3b52-44b0-83fd-6a5ca8f1a6f5", 00:09:28.908 "is_configured": true, 00:09:28.908 "data_offset": 0, 00:09:28.908 "data_size": 65536 00:09:28.908 }, 00:09:28.908 { 00:09:28.908 "name": "BaseBdev2", 00:09:28.908 "uuid": "5e40131d-3fc0-4009-9111-411b33d154dd", 00:09:28.908 "is_configured": true, 00:09:28.908 "data_offset": 0, 00:09:28.908 "data_size": 65536 00:09:28.908 }, 00:09:28.908 { 00:09:28.908 "name": "BaseBdev3", 00:09:28.909 "uuid": "2a9b6414-64b9-47bd-abe5-7e02992aea57", 00:09:28.909 "is_configured": true, 00:09:28.909 "data_offset": 0, 00:09:28.909 "data_size": 65536 00:09:28.909 } 00:09:28.909 ] 00:09:28.909 } 00:09:28.909 } 00:09:28.909 }' 00:09:28.909 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.168 BaseBdev2 00:09:29.168 BaseBdev3' 00:09:29.168 03:20:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.168 [2024-11-05 03:20:42.790262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.168 [2024-11-05 03:20:42.790287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.168 [2024-11-05 03:20:42.790410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.168 [2024-11-05 03:20:42.790478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.168 [2024-11-05 03:20:42.790499] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63580 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63580 ']' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63580 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:29.168 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63580 00:09:29.427 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:29.427 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:29.427 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63580' 00:09:29.427 killing process with pid 63580 00:09:29.427 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63580 00:09:29.427 [2024-11-05 03:20:42.831724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.427 03:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63580 00:09:29.687 [2024-11-05 03:20:43.070546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.624 03:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:30.624 00:09:30.624 real 0m12.030s 00:09:30.624 user 0m20.228s 00:09:30.625 sys 0m1.559s 00:09:30.625 ************************************ 00:09:30.625 END TEST 
raid_state_function_test 00:09:30.625 ************************************ 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.625 03:20:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:30.625 03:20:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:30.625 03:20:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.625 03:20:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.625 ************************************ 00:09:30.625 START TEST raid_state_function_test_sb 00:09:30.625 ************************************ 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.625 03:20:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:30.625 Process raid pid: 64218 00:09:30.625 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64218 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64218' 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64218 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64218 ']' 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:30.625 03:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.625 [2024-11-05 03:20:44.234768] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:09:30.625 [2024-11-05 03:20:44.235247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.884 [2024-11-05 03:20:44.423519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.143 [2024-11-05 03:20:44.550754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.143 [2024-11-05 03:20:44.747945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.143 [2024-11-05 03:20:44.748224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.712 [2024-11-05 03:20:45.274978] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.712 [2024-11-05 03:20:45.275055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.712 [2024-11-05 03:20:45.275072] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.712 [2024-11-05 03:20:45.275087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.712 [2024-11-05 03:20:45.275097] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:31.712 [2024-11-05 03:20:45.275110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.712 "name": "Existed_Raid", 00:09:31.712 "uuid": "808e65a5-1384-439b-8a28-52e8d8d30885", 00:09:31.712 "strip_size_kb": 64, 00:09:31.712 "state": "configuring", 00:09:31.712 "raid_level": "raid0", 00:09:31.712 "superblock": true, 00:09:31.712 "num_base_bdevs": 3, 00:09:31.712 "num_base_bdevs_discovered": 0, 00:09:31.712 "num_base_bdevs_operational": 3, 00:09:31.712 "base_bdevs_list": [ 00:09:31.712 { 00:09:31.712 "name": "BaseBdev1", 00:09:31.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.712 "is_configured": false, 00:09:31.712 "data_offset": 0, 00:09:31.712 "data_size": 0 00:09:31.712 }, 00:09:31.712 { 00:09:31.712 "name": "BaseBdev2", 00:09:31.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.712 "is_configured": false, 00:09:31.712 "data_offset": 0, 00:09:31.712 "data_size": 0 00:09:31.712 }, 00:09:31.712 { 00:09:31.712 "name": "BaseBdev3", 00:09:31.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.712 "is_configured": false, 00:09:31.712 "data_offset": 0, 00:09:31.712 "data_size": 0 00:09:31.712 } 00:09:31.712 ] 00:09:31.712 }' 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.712 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.283 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.284 [2024-11-05 03:20:45.843082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.284 [2024-11-05 03:20:45.843299] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.284 [2024-11-05 03:20:45.851083] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.284 [2024-11-05 03:20:45.851149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.284 [2024-11-05 03:20:45.851163] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.284 [2024-11-05 03:20:45.851177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.284 [2024-11-05 03:20:45.851186] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.284 [2024-11-05 03:20:45.851199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.284 [2024-11-05 03:20:45.893478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.284 BaseBdev1 
00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.284 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.543 [ 00:09:32.543 { 00:09:32.543 "name": "BaseBdev1", 00:09:32.543 "aliases": [ 00:09:32.543 "916da3c5-a5d3-4c07-b48d-3390645095dc" 00:09:32.543 ], 00:09:32.543 "product_name": "Malloc disk", 00:09:32.543 "block_size": 512, 00:09:32.543 "num_blocks": 65536, 00:09:32.543 "uuid": "916da3c5-a5d3-4c07-b48d-3390645095dc", 00:09:32.543 "assigned_rate_limits": { 00:09:32.543 
"rw_ios_per_sec": 0, 00:09:32.543 "rw_mbytes_per_sec": 0, 00:09:32.543 "r_mbytes_per_sec": 0, 00:09:32.543 "w_mbytes_per_sec": 0 00:09:32.543 }, 00:09:32.543 "claimed": true, 00:09:32.543 "claim_type": "exclusive_write", 00:09:32.543 "zoned": false, 00:09:32.543 "supported_io_types": { 00:09:32.543 "read": true, 00:09:32.543 "write": true, 00:09:32.543 "unmap": true, 00:09:32.543 "flush": true, 00:09:32.543 "reset": true, 00:09:32.543 "nvme_admin": false, 00:09:32.543 "nvme_io": false, 00:09:32.543 "nvme_io_md": false, 00:09:32.543 "write_zeroes": true, 00:09:32.543 "zcopy": true, 00:09:32.543 "get_zone_info": false, 00:09:32.543 "zone_management": false, 00:09:32.543 "zone_append": false, 00:09:32.543 "compare": false, 00:09:32.543 "compare_and_write": false, 00:09:32.543 "abort": true, 00:09:32.543 "seek_hole": false, 00:09:32.543 "seek_data": false, 00:09:32.543 "copy": true, 00:09:32.543 "nvme_iov_md": false 00:09:32.543 }, 00:09:32.543 "memory_domains": [ 00:09:32.543 { 00:09:32.543 "dma_device_id": "system", 00:09:32.543 "dma_device_type": 1 00:09:32.543 }, 00:09:32.543 { 00:09:32.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.543 "dma_device_type": 2 00:09:32.543 } 00:09:32.543 ], 00:09:32.543 "driver_specific": {} 00:09:32.543 } 00:09:32.543 ] 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.543 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.543 "name": "Existed_Raid", 00:09:32.543 "uuid": "afd6c613-b8e4-44cb-a159-b689c50554c3", 00:09:32.543 "strip_size_kb": 64, 00:09:32.543 "state": "configuring", 00:09:32.543 "raid_level": "raid0", 00:09:32.543 "superblock": true, 00:09:32.543 "num_base_bdevs": 3, 00:09:32.543 "num_base_bdevs_discovered": 1, 00:09:32.543 "num_base_bdevs_operational": 3, 00:09:32.543 "base_bdevs_list": [ 00:09:32.543 { 00:09:32.543 "name": "BaseBdev1", 00:09:32.543 "uuid": "916da3c5-a5d3-4c07-b48d-3390645095dc", 00:09:32.543 "is_configured": true, 00:09:32.543 "data_offset": 2048, 00:09:32.543 "data_size": 63488 
00:09:32.543 }, 00:09:32.543 { 00:09:32.543 "name": "BaseBdev2", 00:09:32.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.543 "is_configured": false, 00:09:32.543 "data_offset": 0, 00:09:32.543 "data_size": 0 00:09:32.543 }, 00:09:32.543 { 00:09:32.543 "name": "BaseBdev3", 00:09:32.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.544 "is_configured": false, 00:09:32.544 "data_offset": 0, 00:09:32.544 "data_size": 0 00:09:32.544 } 00:09:32.544 ] 00:09:32.544 }' 00:09:32.544 03:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.544 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 [2024-11-05 03:20:46.478151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.111 [2024-11-05 03:20:46.478244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 [2024-11-05 03:20:46.486260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.111 [2024-11-05 
03:20:46.489001] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.111 [2024-11-05 03:20:46.489092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.111 [2024-11-05 03:20:46.489118] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.111 [2024-11-05 03:20:46.489133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.111 "name": "Existed_Raid", 00:09:33.111 "uuid": "96cdbf7e-c666-4b33-b92b-801da797b1bc", 00:09:33.111 "strip_size_kb": 64, 00:09:33.111 "state": "configuring", 00:09:33.111 "raid_level": "raid0", 00:09:33.111 "superblock": true, 00:09:33.111 "num_base_bdevs": 3, 00:09:33.111 "num_base_bdevs_discovered": 1, 00:09:33.111 "num_base_bdevs_operational": 3, 00:09:33.111 "base_bdevs_list": [ 00:09:33.111 { 00:09:33.111 "name": "BaseBdev1", 00:09:33.111 "uuid": "916da3c5-a5d3-4c07-b48d-3390645095dc", 00:09:33.111 "is_configured": true, 00:09:33.111 "data_offset": 2048, 00:09:33.111 "data_size": 63488 00:09:33.111 }, 00:09:33.111 { 00:09:33.111 "name": "BaseBdev2", 00:09:33.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.111 "is_configured": false, 00:09:33.111 "data_offset": 0, 00:09:33.111 "data_size": 0 00:09:33.111 }, 00:09:33.111 { 00:09:33.111 "name": "BaseBdev3", 00:09:33.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.111 "is_configured": false, 00:09:33.111 "data_offset": 0, 00:09:33.111 "data_size": 0 00:09:33.111 } 00:09:33.111 ] 00:09:33.111 }' 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.111 03:20:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.679 [2024-11-05 03:20:47.084246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.679 BaseBdev2 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.679 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.679 [ 00:09:33.679 { 00:09:33.679 "name": "BaseBdev2", 00:09:33.680 "aliases": [ 00:09:33.680 "43e9db72-73cf-4a8f-ad32-a308b911a947" 00:09:33.680 ], 00:09:33.680 "product_name": "Malloc disk", 00:09:33.680 "block_size": 512, 00:09:33.680 "num_blocks": 65536, 00:09:33.680 "uuid": "43e9db72-73cf-4a8f-ad32-a308b911a947", 00:09:33.680 "assigned_rate_limits": { 00:09:33.680 "rw_ios_per_sec": 0, 00:09:33.680 "rw_mbytes_per_sec": 0, 00:09:33.680 "r_mbytes_per_sec": 0, 00:09:33.680 "w_mbytes_per_sec": 0 00:09:33.680 }, 00:09:33.680 "claimed": true, 00:09:33.680 "claim_type": "exclusive_write", 00:09:33.680 "zoned": false, 00:09:33.680 "supported_io_types": { 00:09:33.680 "read": true, 00:09:33.680 "write": true, 00:09:33.680 "unmap": true, 00:09:33.680 "flush": true, 00:09:33.680 "reset": true, 00:09:33.680 "nvme_admin": false, 00:09:33.680 "nvme_io": false, 00:09:33.680 "nvme_io_md": false, 00:09:33.680 "write_zeroes": true, 00:09:33.680 "zcopy": true, 00:09:33.680 "get_zone_info": false, 00:09:33.680 "zone_management": false, 00:09:33.680 "zone_append": false, 00:09:33.680 "compare": false, 00:09:33.680 "compare_and_write": false, 00:09:33.680 "abort": true, 00:09:33.680 "seek_hole": false, 00:09:33.680 "seek_data": false, 00:09:33.680 "copy": true, 00:09:33.680 "nvme_iov_md": false 00:09:33.680 }, 00:09:33.680 "memory_domains": [ 00:09:33.680 { 00:09:33.680 "dma_device_id": "system", 00:09:33.680 "dma_device_type": 1 00:09:33.680 }, 00:09:33.680 { 00:09:33.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.680 "dma_device_type": 2 00:09:33.680 } 00:09:33.680 ], 00:09:33.680 "driver_specific": {} 00:09:33.680 } 00:09:33.680 ] 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.680 "name": "Existed_Raid", 00:09:33.680 "uuid": "96cdbf7e-c666-4b33-b92b-801da797b1bc", 00:09:33.680 "strip_size_kb": 64, 00:09:33.680 "state": "configuring", 00:09:33.680 "raid_level": "raid0", 00:09:33.680 "superblock": true, 00:09:33.680 "num_base_bdevs": 3, 00:09:33.680 "num_base_bdevs_discovered": 2, 00:09:33.680 "num_base_bdevs_operational": 3, 00:09:33.680 "base_bdevs_list": [ 00:09:33.680 { 00:09:33.680 "name": "BaseBdev1", 00:09:33.680 "uuid": "916da3c5-a5d3-4c07-b48d-3390645095dc", 00:09:33.680 "is_configured": true, 00:09:33.680 "data_offset": 2048, 00:09:33.680 "data_size": 63488 00:09:33.680 }, 00:09:33.680 { 00:09:33.680 "name": "BaseBdev2", 00:09:33.680 "uuid": "43e9db72-73cf-4a8f-ad32-a308b911a947", 00:09:33.680 "is_configured": true, 00:09:33.680 "data_offset": 2048, 00:09:33.680 "data_size": 63488 00:09:33.680 }, 00:09:33.680 { 00:09:33.680 "name": "BaseBdev3", 00:09:33.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.680 "is_configured": false, 00:09:33.680 "data_offset": 0, 00:09:33.680 "data_size": 0 00:09:33.680 } 00:09:33.680 ] 00:09:33.680 }' 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.680 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.248 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.248 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.248 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.248 [2024-11-05 03:20:47.734962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.248 [2024-11-05 03:20:47.735341] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.248 [2024-11-05 03:20:47.735373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.248 [2024-11-05 03:20:47.735757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.248 [2024-11-05 03:20:47.735959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.248 [2024-11-05 03:20:47.735976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:34.249 [2024-11-05 03:20:47.736201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.249 BaseBdev3 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.249 [ 00:09:34.249 { 00:09:34.249 "name": "BaseBdev3", 00:09:34.249 "aliases": [ 00:09:34.249 "5dc92ee5-bddb-491e-a48d-387825298c7a" 00:09:34.249 ], 00:09:34.249 "product_name": "Malloc disk", 00:09:34.249 "block_size": 512, 00:09:34.249 "num_blocks": 65536, 00:09:34.249 "uuid": "5dc92ee5-bddb-491e-a48d-387825298c7a", 00:09:34.249 "assigned_rate_limits": { 00:09:34.249 "rw_ios_per_sec": 0, 00:09:34.249 "rw_mbytes_per_sec": 0, 00:09:34.249 "r_mbytes_per_sec": 0, 00:09:34.249 "w_mbytes_per_sec": 0 00:09:34.249 }, 00:09:34.249 "claimed": true, 00:09:34.249 "claim_type": "exclusive_write", 00:09:34.249 "zoned": false, 00:09:34.249 "supported_io_types": { 00:09:34.249 "read": true, 00:09:34.249 "write": true, 00:09:34.249 "unmap": true, 00:09:34.249 "flush": true, 00:09:34.249 "reset": true, 00:09:34.249 "nvme_admin": false, 00:09:34.249 "nvme_io": false, 00:09:34.249 "nvme_io_md": false, 00:09:34.249 "write_zeroes": true, 00:09:34.249 "zcopy": true, 00:09:34.249 "get_zone_info": false, 00:09:34.249 "zone_management": false, 00:09:34.249 "zone_append": false, 00:09:34.249 "compare": false, 00:09:34.249 "compare_and_write": false, 00:09:34.249 "abort": true, 00:09:34.249 "seek_hole": false, 00:09:34.249 "seek_data": false, 00:09:34.249 "copy": true, 00:09:34.249 "nvme_iov_md": false 00:09:34.249 }, 00:09:34.249 "memory_domains": [ 00:09:34.249 { 00:09:34.249 "dma_device_id": "system", 00:09:34.249 "dma_device_type": 1 00:09:34.249 }, 00:09:34.249 { 00:09:34.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.249 "dma_device_type": 2 00:09:34.249 } 00:09:34.249 ], 00:09:34.249 "driver_specific": 
{} 00:09:34.249 } 00:09:34.249 ] 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.249 "name": "Existed_Raid", 00:09:34.249 "uuid": "96cdbf7e-c666-4b33-b92b-801da797b1bc", 00:09:34.249 "strip_size_kb": 64, 00:09:34.249 "state": "online", 00:09:34.249 "raid_level": "raid0", 00:09:34.249 "superblock": true, 00:09:34.249 "num_base_bdevs": 3, 00:09:34.249 "num_base_bdevs_discovered": 3, 00:09:34.249 "num_base_bdevs_operational": 3, 00:09:34.249 "base_bdevs_list": [ 00:09:34.249 { 00:09:34.249 "name": "BaseBdev1", 00:09:34.249 "uuid": "916da3c5-a5d3-4c07-b48d-3390645095dc", 00:09:34.249 "is_configured": true, 00:09:34.249 "data_offset": 2048, 00:09:34.249 "data_size": 63488 00:09:34.249 }, 00:09:34.249 { 00:09:34.249 "name": "BaseBdev2", 00:09:34.249 "uuid": "43e9db72-73cf-4a8f-ad32-a308b911a947", 00:09:34.249 "is_configured": true, 00:09:34.249 "data_offset": 2048, 00:09:34.249 "data_size": 63488 00:09:34.249 }, 00:09:34.249 { 00:09:34.249 "name": "BaseBdev3", 00:09:34.249 "uuid": "5dc92ee5-bddb-491e-a48d-387825298c7a", 00:09:34.249 "is_configured": true, 00:09:34.249 "data_offset": 2048, 00:09:34.249 "data_size": 63488 00:09:34.249 } 00:09:34.249 ] 00:09:34.249 }' 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.249 03:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.817 [2024-11-05 03:20:48.363632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.817 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.817 "name": "Existed_Raid", 00:09:34.817 "aliases": [ 00:09:34.817 "96cdbf7e-c666-4b33-b92b-801da797b1bc" 00:09:34.817 ], 00:09:34.817 "product_name": "Raid Volume", 00:09:34.817 "block_size": 512, 00:09:34.817 "num_blocks": 190464, 00:09:34.817 "uuid": "96cdbf7e-c666-4b33-b92b-801da797b1bc", 00:09:34.817 "assigned_rate_limits": { 00:09:34.817 "rw_ios_per_sec": 0, 00:09:34.817 "rw_mbytes_per_sec": 0, 00:09:34.817 "r_mbytes_per_sec": 0, 00:09:34.817 "w_mbytes_per_sec": 0 00:09:34.817 }, 00:09:34.817 "claimed": false, 00:09:34.817 "zoned": false, 00:09:34.817 "supported_io_types": { 00:09:34.817 "read": true, 00:09:34.817 "write": true, 00:09:34.817 "unmap": true, 00:09:34.817 "flush": true, 00:09:34.817 "reset": true, 00:09:34.817 "nvme_admin": false, 00:09:34.817 "nvme_io": false, 00:09:34.817 "nvme_io_md": false, 00:09:34.817 
"write_zeroes": true, 00:09:34.817 "zcopy": false, 00:09:34.817 "get_zone_info": false, 00:09:34.817 "zone_management": false, 00:09:34.817 "zone_append": false, 00:09:34.817 "compare": false, 00:09:34.817 "compare_and_write": false, 00:09:34.817 "abort": false, 00:09:34.817 "seek_hole": false, 00:09:34.817 "seek_data": false, 00:09:34.817 "copy": false, 00:09:34.817 "nvme_iov_md": false 00:09:34.817 }, 00:09:34.817 "memory_domains": [ 00:09:34.817 { 00:09:34.817 "dma_device_id": "system", 00:09:34.817 "dma_device_type": 1 00:09:34.817 }, 00:09:34.817 { 00:09:34.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.817 "dma_device_type": 2 00:09:34.817 }, 00:09:34.817 { 00:09:34.817 "dma_device_id": "system", 00:09:34.817 "dma_device_type": 1 00:09:34.817 }, 00:09:34.817 { 00:09:34.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.818 "dma_device_type": 2 00:09:34.818 }, 00:09:34.818 { 00:09:34.818 "dma_device_id": "system", 00:09:34.818 "dma_device_type": 1 00:09:34.818 }, 00:09:34.818 { 00:09:34.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.818 "dma_device_type": 2 00:09:34.818 } 00:09:34.818 ], 00:09:34.818 "driver_specific": { 00:09:34.818 "raid": { 00:09:34.818 "uuid": "96cdbf7e-c666-4b33-b92b-801da797b1bc", 00:09:34.818 "strip_size_kb": 64, 00:09:34.818 "state": "online", 00:09:34.818 "raid_level": "raid0", 00:09:34.818 "superblock": true, 00:09:34.818 "num_base_bdevs": 3, 00:09:34.818 "num_base_bdevs_discovered": 3, 00:09:34.818 "num_base_bdevs_operational": 3, 00:09:34.818 "base_bdevs_list": [ 00:09:34.818 { 00:09:34.818 "name": "BaseBdev1", 00:09:34.818 "uuid": "916da3c5-a5d3-4c07-b48d-3390645095dc", 00:09:34.818 "is_configured": true, 00:09:34.818 "data_offset": 2048, 00:09:34.818 "data_size": 63488 00:09:34.818 }, 00:09:34.818 { 00:09:34.818 "name": "BaseBdev2", 00:09:34.818 "uuid": "43e9db72-73cf-4a8f-ad32-a308b911a947", 00:09:34.818 "is_configured": true, 00:09:34.818 "data_offset": 2048, 00:09:34.818 "data_size": 63488 00:09:34.818 }, 
00:09:34.818 { 00:09:34.818 "name": "BaseBdev3", 00:09:34.818 "uuid": "5dc92ee5-bddb-491e-a48d-387825298c7a", 00:09:34.818 "is_configured": true, 00:09:34.818 "data_offset": 2048, 00:09:34.818 "data_size": 63488 00:09:34.818 } 00:09:34.818 ] 00:09:34.818 } 00:09:34.818 } 00:09:34.818 }' 00:09:34.818 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.077 BaseBdev2 00:09:35.077 BaseBdev3' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.077 
03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.077 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.078 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.078 [2024-11-05 03:20:48.695423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.078 [2024-11-05 03:20:48.695461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.078 [2024-11-05 03:20:48.695540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.337 "name": "Existed_Raid", 00:09:35.337 "uuid": "96cdbf7e-c666-4b33-b92b-801da797b1bc", 00:09:35.337 "strip_size_kb": 64, 00:09:35.337 "state": "offline", 00:09:35.337 "raid_level": "raid0", 00:09:35.337 "superblock": true, 00:09:35.337 "num_base_bdevs": 3, 00:09:35.337 "num_base_bdevs_discovered": 2, 00:09:35.337 "num_base_bdevs_operational": 2, 00:09:35.337 "base_bdevs_list": [ 00:09:35.337 { 00:09:35.337 "name": null, 00:09:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.337 "is_configured": false, 00:09:35.337 "data_offset": 0, 00:09:35.337 "data_size": 63488 00:09:35.337 }, 00:09:35.337 { 00:09:35.337 "name": "BaseBdev2", 00:09:35.337 "uuid": "43e9db72-73cf-4a8f-ad32-a308b911a947", 00:09:35.337 "is_configured": true, 00:09:35.337 "data_offset": 2048, 00:09:35.337 "data_size": 63488 00:09:35.337 }, 00:09:35.337 { 00:09:35.337 "name": "BaseBdev3", 00:09:35.337 "uuid": "5dc92ee5-bddb-491e-a48d-387825298c7a", 
00:09:35.337 "is_configured": true, 00:09:35.337 "data_offset": 2048, 00:09:35.337 "data_size": 63488 00:09:35.337 } 00:09:35.337 ] 00:09:35.337 }' 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.337 03:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.905 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.905 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.906 [2024-11-05 03:20:49.408219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.906 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.166 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 [2024-11-05 03:20:49.557278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.167 [2024-11-05 03:20:49.557386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 BaseBdev2 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:36.167 03:20:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 [ 00:09:36.167 { 00:09:36.167 "name": "BaseBdev2", 00:09:36.167 "aliases": [ 00:09:36.167 "1e463450-9b07-4070-8fa5-9a7c247d25cd" 00:09:36.167 ], 00:09:36.167 "product_name": "Malloc disk", 00:09:36.167 "block_size": 512, 00:09:36.167 "num_blocks": 65536, 00:09:36.167 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:36.167 "assigned_rate_limits": { 00:09:36.167 "rw_ios_per_sec": 0, 00:09:36.167 "rw_mbytes_per_sec": 0, 00:09:36.167 "r_mbytes_per_sec": 0, 00:09:36.167 "w_mbytes_per_sec": 0 00:09:36.167 }, 00:09:36.167 "claimed": false, 00:09:36.167 "zoned": false, 00:09:36.167 "supported_io_types": { 00:09:36.167 "read": true, 00:09:36.167 "write": true, 00:09:36.167 "unmap": true, 00:09:36.167 "flush": true, 00:09:36.167 "reset": true, 00:09:36.167 "nvme_admin": false, 00:09:36.167 "nvme_io": false, 00:09:36.167 "nvme_io_md": false, 00:09:36.167 "write_zeroes": true, 00:09:36.167 "zcopy": true, 00:09:36.167 "get_zone_info": false, 00:09:36.167 
"zone_management": false, 00:09:36.167 "zone_append": false, 00:09:36.167 "compare": false, 00:09:36.167 "compare_and_write": false, 00:09:36.167 "abort": true, 00:09:36.167 "seek_hole": false, 00:09:36.167 "seek_data": false, 00:09:36.167 "copy": true, 00:09:36.167 "nvme_iov_md": false 00:09:36.167 }, 00:09:36.167 "memory_domains": [ 00:09:36.167 { 00:09:36.167 "dma_device_id": "system", 00:09:36.167 "dma_device_type": 1 00:09:36.167 }, 00:09:36.167 { 00:09:36.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.167 "dma_device_type": 2 00:09:36.167 } 00:09:36.167 ], 00:09:36.167 "driver_specific": {} 00:09:36.167 } 00:09:36.167 ] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.167 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.426 BaseBdev3 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.426 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.426 [ 00:09:36.426 { 00:09:36.426 "name": "BaseBdev3", 00:09:36.426 "aliases": [ 00:09:36.426 "0fefeef8-b592-4f2f-b320-3b9e7665b80e" 00:09:36.426 ], 00:09:36.426 "product_name": "Malloc disk", 00:09:36.426 "block_size": 512, 00:09:36.426 "num_blocks": 65536, 00:09:36.426 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:36.426 "assigned_rate_limits": { 00:09:36.426 "rw_ios_per_sec": 0, 00:09:36.426 "rw_mbytes_per_sec": 0, 00:09:36.426 "r_mbytes_per_sec": 0, 00:09:36.426 "w_mbytes_per_sec": 0 00:09:36.426 }, 00:09:36.426 "claimed": false, 00:09:36.426 "zoned": false, 00:09:36.426 "supported_io_types": { 00:09:36.426 "read": true, 00:09:36.426 "write": true, 00:09:36.426 "unmap": true, 00:09:36.426 "flush": true, 00:09:36.426 "reset": true, 00:09:36.426 "nvme_admin": false, 00:09:36.426 "nvme_io": false, 00:09:36.426 "nvme_io_md": false, 00:09:36.426 "write_zeroes": true, 00:09:36.426 
"zcopy": true, 00:09:36.426 "get_zone_info": false, 00:09:36.426 "zone_management": false, 00:09:36.426 "zone_append": false, 00:09:36.426 "compare": false, 00:09:36.426 "compare_and_write": false, 00:09:36.426 "abort": true, 00:09:36.426 "seek_hole": false, 00:09:36.426 "seek_data": false, 00:09:36.426 "copy": true, 00:09:36.426 "nvme_iov_md": false 00:09:36.426 }, 00:09:36.426 "memory_domains": [ 00:09:36.426 { 00:09:36.426 "dma_device_id": "system", 00:09:36.426 "dma_device_type": 1 00:09:36.426 }, 00:09:36.426 { 00:09:36.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.426 "dma_device_type": 2 00:09:36.426 } 00:09:36.427 ], 00:09:36.427 "driver_specific": {} 00:09:36.427 } 00:09:36.427 ] 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.427 [2024-11-05 03:20:49.871846] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.427 [2024-11-05 03:20:49.871917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.427 [2024-11-05 03:20:49.871985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.427 [2024-11-05 03:20:49.874583] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.427 03:20:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.427 "name": "Existed_Raid", 00:09:36.427 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:36.427 "strip_size_kb": 64, 00:09:36.427 "state": "configuring", 00:09:36.427 "raid_level": "raid0", 00:09:36.427 "superblock": true, 00:09:36.427 "num_base_bdevs": 3, 00:09:36.427 "num_base_bdevs_discovered": 2, 00:09:36.427 "num_base_bdevs_operational": 3, 00:09:36.427 "base_bdevs_list": [ 00:09:36.427 { 00:09:36.427 "name": "BaseBdev1", 00:09:36.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.427 "is_configured": false, 00:09:36.427 "data_offset": 0, 00:09:36.427 "data_size": 0 00:09:36.427 }, 00:09:36.427 { 00:09:36.427 "name": "BaseBdev2", 00:09:36.427 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:36.427 "is_configured": true, 00:09:36.427 "data_offset": 2048, 00:09:36.427 "data_size": 63488 00:09:36.427 }, 00:09:36.427 { 00:09:36.427 "name": "BaseBdev3", 00:09:36.427 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:36.427 "is_configured": true, 00:09:36.427 "data_offset": 2048, 00:09:36.427 "data_size": 63488 00:09:36.427 } 00:09:36.427 ] 00:09:36.427 }' 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.427 03:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.995 [2024-11-05 03:20:50.416028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.995 03:20:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.995 "name": "Existed_Raid", 00:09:36.995 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:36.995 "strip_size_kb": 64, 
00:09:36.995 "state": "configuring", 00:09:36.995 "raid_level": "raid0", 00:09:36.995 "superblock": true, 00:09:36.995 "num_base_bdevs": 3, 00:09:36.995 "num_base_bdevs_discovered": 1, 00:09:36.995 "num_base_bdevs_operational": 3, 00:09:36.995 "base_bdevs_list": [ 00:09:36.995 { 00:09:36.995 "name": "BaseBdev1", 00:09:36.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.995 "is_configured": false, 00:09:36.995 "data_offset": 0, 00:09:36.995 "data_size": 0 00:09:36.995 }, 00:09:36.995 { 00:09:36.995 "name": null, 00:09:36.995 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:36.995 "is_configured": false, 00:09:36.995 "data_offset": 0, 00:09:36.995 "data_size": 63488 00:09:36.995 }, 00:09:36.995 { 00:09:36.995 "name": "BaseBdev3", 00:09:36.995 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:36.995 "is_configured": true, 00:09:36.995 "data_offset": 2048, 00:09:36.995 "data_size": 63488 00:09:36.995 } 00:09:36.995 ] 00:09:36.995 }' 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.995 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.564 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.564 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.564 03:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.564 03:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 [2024-11-05 03:20:51.038791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.564 BaseBdev1 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 
[ 00:09:37.564 { 00:09:37.564 "name": "BaseBdev1", 00:09:37.564 "aliases": [ 00:09:37.564 "fc4e31a9-5e0e-47df-a8c4-aa84398323b5" 00:09:37.564 ], 00:09:37.564 "product_name": "Malloc disk", 00:09:37.564 "block_size": 512, 00:09:37.564 "num_blocks": 65536, 00:09:37.564 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:37.564 "assigned_rate_limits": { 00:09:37.564 "rw_ios_per_sec": 0, 00:09:37.564 "rw_mbytes_per_sec": 0, 00:09:37.564 "r_mbytes_per_sec": 0, 00:09:37.564 "w_mbytes_per_sec": 0 00:09:37.564 }, 00:09:37.564 "claimed": true, 00:09:37.564 "claim_type": "exclusive_write", 00:09:37.564 "zoned": false, 00:09:37.564 "supported_io_types": { 00:09:37.564 "read": true, 00:09:37.564 "write": true, 00:09:37.564 "unmap": true, 00:09:37.564 "flush": true, 00:09:37.564 "reset": true, 00:09:37.564 "nvme_admin": false, 00:09:37.564 "nvme_io": false, 00:09:37.564 "nvme_io_md": false, 00:09:37.564 "write_zeroes": true, 00:09:37.564 "zcopy": true, 00:09:37.564 "get_zone_info": false, 00:09:37.564 "zone_management": false, 00:09:37.564 "zone_append": false, 00:09:37.564 "compare": false, 00:09:37.564 "compare_and_write": false, 00:09:37.564 "abort": true, 00:09:37.564 "seek_hole": false, 00:09:37.564 "seek_data": false, 00:09:37.564 "copy": true, 00:09:37.564 "nvme_iov_md": false 00:09:37.564 }, 00:09:37.564 "memory_domains": [ 00:09:37.564 { 00:09:37.564 "dma_device_id": "system", 00:09:37.564 "dma_device_type": 1 00:09:37.564 }, 00:09:37.564 { 00:09:37.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.564 "dma_device_type": 2 00:09:37.564 } 00:09:37.564 ], 00:09:37.564 "driver_specific": {} 00:09:37.564 } 00:09:37.564 ] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.564 "name": "Existed_Raid", 00:09:37.564 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:37.564 "strip_size_kb": 64, 00:09:37.564 "state": "configuring", 00:09:37.564 "raid_level": "raid0", 00:09:37.564 "superblock": true, 
00:09:37.564 "num_base_bdevs": 3, 00:09:37.564 "num_base_bdevs_discovered": 2, 00:09:37.564 "num_base_bdevs_operational": 3, 00:09:37.564 "base_bdevs_list": [ 00:09:37.564 { 00:09:37.564 "name": "BaseBdev1", 00:09:37.564 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:37.564 "is_configured": true, 00:09:37.564 "data_offset": 2048, 00:09:37.564 "data_size": 63488 00:09:37.564 }, 00:09:37.564 { 00:09:37.564 "name": null, 00:09:37.564 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:37.564 "is_configured": false, 00:09:37.564 "data_offset": 0, 00:09:37.564 "data_size": 63488 00:09:37.564 }, 00:09:37.564 { 00:09:37.564 "name": "BaseBdev3", 00:09:37.564 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:37.564 "is_configured": true, 00:09:37.564 "data_offset": 2048, 00:09:37.564 "data_size": 63488 00:09:37.564 } 00:09:37.564 ] 00:09:37.564 }' 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.564 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.132 [2024-11-05 03:20:51.675011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.132 "name": "Existed_Raid", 00:09:38.132 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:38.132 "strip_size_kb": 64, 00:09:38.132 "state": "configuring", 00:09:38.132 "raid_level": "raid0", 00:09:38.132 "superblock": true, 00:09:38.132 "num_base_bdevs": 3, 00:09:38.132 "num_base_bdevs_discovered": 1, 00:09:38.132 "num_base_bdevs_operational": 3, 00:09:38.132 "base_bdevs_list": [ 00:09:38.132 { 00:09:38.132 "name": "BaseBdev1", 00:09:38.132 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:38.132 "is_configured": true, 00:09:38.132 "data_offset": 2048, 00:09:38.132 "data_size": 63488 00:09:38.132 }, 00:09:38.132 { 00:09:38.132 "name": null, 00:09:38.132 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:38.132 "is_configured": false, 00:09:38.132 "data_offset": 0, 00:09:38.132 "data_size": 63488 00:09:38.132 }, 00:09:38.132 { 00:09:38.132 "name": null, 00:09:38.132 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:38.132 "is_configured": false, 00:09:38.132 "data_offset": 0, 00:09:38.132 "data_size": 63488 00:09:38.132 } 00:09:38.132 ] 00:09:38.132 }' 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.132 03:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.723 [2024-11-05 03:20:52.267202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.723 "name": "Existed_Raid", 00:09:38.723 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:38.723 "strip_size_kb": 64, 00:09:38.723 "state": "configuring", 00:09:38.723 "raid_level": "raid0", 00:09:38.723 "superblock": true, 00:09:38.723 "num_base_bdevs": 3, 00:09:38.723 "num_base_bdevs_discovered": 2, 00:09:38.723 "num_base_bdevs_operational": 3, 00:09:38.723 "base_bdevs_list": [ 00:09:38.723 { 00:09:38.723 "name": "BaseBdev1", 00:09:38.723 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:38.723 "is_configured": true, 00:09:38.723 "data_offset": 2048, 00:09:38.723 "data_size": 63488 00:09:38.723 }, 00:09:38.723 { 00:09:38.723 "name": null, 00:09:38.723 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:38.723 "is_configured": false, 00:09:38.723 "data_offset": 0, 00:09:38.723 "data_size": 63488 00:09:38.723 }, 00:09:38.723 { 00:09:38.723 "name": "BaseBdev3", 00:09:38.723 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:38.723 "is_configured": true, 00:09:38.723 "data_offset": 2048, 00:09:38.723 "data_size": 63488 00:09:38.723 } 00:09:38.723 ] 00:09:38.723 }' 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.723 03:20:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:39.290 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.290 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.290 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.290 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.290 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.291 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.291 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.291 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.291 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.291 [2024-11-05 03:20:52.859433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.549 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.549 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.549 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.549 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.549 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.549 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.550 "name": "Existed_Raid", 00:09:39.550 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:39.550 "strip_size_kb": 64, 00:09:39.550 "state": "configuring", 00:09:39.550 "raid_level": "raid0", 00:09:39.550 "superblock": true, 00:09:39.550 "num_base_bdevs": 3, 00:09:39.550 "num_base_bdevs_discovered": 1, 00:09:39.550 "num_base_bdevs_operational": 3, 00:09:39.550 "base_bdevs_list": [ 00:09:39.550 { 00:09:39.550 "name": null, 00:09:39.550 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:39.550 "is_configured": false, 00:09:39.550 "data_offset": 0, 00:09:39.550 "data_size": 63488 00:09:39.550 }, 00:09:39.550 { 00:09:39.550 "name": null, 00:09:39.550 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:39.550 "is_configured": false, 00:09:39.550 "data_offset": 0, 00:09:39.550 
"data_size": 63488 00:09:39.550 }, 00:09:39.550 { 00:09:39.550 "name": "BaseBdev3", 00:09:39.550 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:39.550 "is_configured": true, 00:09:39.550 "data_offset": 2048, 00:09:39.550 "data_size": 63488 00:09:39.550 } 00:09:39.550 ] 00:09:39.550 }' 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.550 03:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.808 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.808 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.808 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.808 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.068 [2024-11-05 03:20:53.497636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:40.068 03:20:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.068 "name": "Existed_Raid", 00:09:40.068 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:40.068 "strip_size_kb": 64, 00:09:40.068 "state": "configuring", 00:09:40.068 "raid_level": "raid0", 00:09:40.068 "superblock": true, 00:09:40.068 "num_base_bdevs": 3, 00:09:40.068 
"num_base_bdevs_discovered": 2, 00:09:40.068 "num_base_bdevs_operational": 3, 00:09:40.068 "base_bdevs_list": [ 00:09:40.068 { 00:09:40.068 "name": null, 00:09:40.068 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:40.068 "is_configured": false, 00:09:40.068 "data_offset": 0, 00:09:40.068 "data_size": 63488 00:09:40.068 }, 00:09:40.068 { 00:09:40.068 "name": "BaseBdev2", 00:09:40.068 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:40.068 "is_configured": true, 00:09:40.068 "data_offset": 2048, 00:09:40.068 "data_size": 63488 00:09:40.068 }, 00:09:40.068 { 00:09:40.068 "name": "BaseBdev3", 00:09:40.068 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:40.068 "is_configured": true, 00:09:40.068 "data_offset": 2048, 00:09:40.068 "data_size": 63488 00:09:40.068 } 00:09:40.068 ] 00:09:40.068 }' 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.068 03:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.636 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.637 03:20:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc4e31a9-5e0e-47df-a8c4-aa84398323b5 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 [2024-11-05 03:20:54.161942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.637 [2024-11-05 03:20:54.162225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.637 [2024-11-05 03:20:54.162247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.637 [2024-11-05 03:20:54.162606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.637 NewBaseBdev 00:09:40.637 [2024-11-05 03:20:54.162798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.637 [2024-11-05 03:20:54.162814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.637 [2024-11-05 03:20:54.162974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 [ 00:09:40.637 { 00:09:40.637 "name": "NewBaseBdev", 00:09:40.637 "aliases": [ 00:09:40.637 "fc4e31a9-5e0e-47df-a8c4-aa84398323b5" 00:09:40.637 ], 00:09:40.637 "product_name": "Malloc disk", 00:09:40.637 "block_size": 512, 00:09:40.637 "num_blocks": 65536, 00:09:40.637 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:40.637 "assigned_rate_limits": { 00:09:40.637 "rw_ios_per_sec": 0, 00:09:40.637 "rw_mbytes_per_sec": 0, 00:09:40.637 "r_mbytes_per_sec": 0, 00:09:40.637 "w_mbytes_per_sec": 0 00:09:40.637 }, 00:09:40.637 "claimed": true, 00:09:40.637 "claim_type": "exclusive_write", 00:09:40.637 "zoned": false, 00:09:40.637 "supported_io_types": { 00:09:40.637 "read": true, 00:09:40.637 "write": true, 
00:09:40.637 "unmap": true, 00:09:40.637 "flush": true, 00:09:40.637 "reset": true, 00:09:40.637 "nvme_admin": false, 00:09:40.637 "nvme_io": false, 00:09:40.637 "nvme_io_md": false, 00:09:40.637 "write_zeroes": true, 00:09:40.637 "zcopy": true, 00:09:40.637 "get_zone_info": false, 00:09:40.637 "zone_management": false, 00:09:40.637 "zone_append": false, 00:09:40.637 "compare": false, 00:09:40.637 "compare_and_write": false, 00:09:40.637 "abort": true, 00:09:40.637 "seek_hole": false, 00:09:40.637 "seek_data": false, 00:09:40.637 "copy": true, 00:09:40.637 "nvme_iov_md": false 00:09:40.637 }, 00:09:40.637 "memory_domains": [ 00:09:40.637 { 00:09:40.637 "dma_device_id": "system", 00:09:40.637 "dma_device_type": 1 00:09:40.637 }, 00:09:40.637 { 00:09:40.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.637 "dma_device_type": 2 00:09:40.637 } 00:09:40.637 ], 00:09:40.637 "driver_specific": {} 00:09:40.637 } 00:09:40.637 ] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.637 "name": "Existed_Raid", 00:09:40.637 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:40.637 "strip_size_kb": 64, 00:09:40.637 "state": "online", 00:09:40.637 "raid_level": "raid0", 00:09:40.637 "superblock": true, 00:09:40.637 "num_base_bdevs": 3, 00:09:40.637 "num_base_bdevs_discovered": 3, 00:09:40.637 "num_base_bdevs_operational": 3, 00:09:40.637 "base_bdevs_list": [ 00:09:40.637 { 00:09:40.637 "name": "NewBaseBdev", 00:09:40.637 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:40.637 "is_configured": true, 00:09:40.637 "data_offset": 2048, 00:09:40.637 "data_size": 63488 00:09:40.637 }, 00:09:40.637 { 00:09:40.637 "name": "BaseBdev2", 00:09:40.637 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:40.637 "is_configured": true, 00:09:40.637 "data_offset": 2048, 00:09:40.637 "data_size": 63488 00:09:40.637 }, 00:09:40.637 { 00:09:40.637 "name": "BaseBdev3", 00:09:40.637 "uuid": 
"0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:40.637 "is_configured": true, 00:09:40.637 "data_offset": 2048, 00:09:40.637 "data_size": 63488 00:09:40.637 } 00:09:40.637 ] 00:09:40.637 }' 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.637 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.207 [2024-11-05 03:20:54.730599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.207 "name": "Existed_Raid", 00:09:41.207 "aliases": [ 00:09:41.207 "4ec2a5c5-0844-4738-91b4-6c24d8910380" 
00:09:41.207 ], 00:09:41.207 "product_name": "Raid Volume", 00:09:41.207 "block_size": 512, 00:09:41.207 "num_blocks": 190464, 00:09:41.207 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:41.207 "assigned_rate_limits": { 00:09:41.207 "rw_ios_per_sec": 0, 00:09:41.207 "rw_mbytes_per_sec": 0, 00:09:41.207 "r_mbytes_per_sec": 0, 00:09:41.207 "w_mbytes_per_sec": 0 00:09:41.207 }, 00:09:41.207 "claimed": false, 00:09:41.207 "zoned": false, 00:09:41.207 "supported_io_types": { 00:09:41.207 "read": true, 00:09:41.207 "write": true, 00:09:41.207 "unmap": true, 00:09:41.207 "flush": true, 00:09:41.207 "reset": true, 00:09:41.207 "nvme_admin": false, 00:09:41.207 "nvme_io": false, 00:09:41.207 "nvme_io_md": false, 00:09:41.207 "write_zeroes": true, 00:09:41.207 "zcopy": false, 00:09:41.207 "get_zone_info": false, 00:09:41.207 "zone_management": false, 00:09:41.207 "zone_append": false, 00:09:41.207 "compare": false, 00:09:41.207 "compare_and_write": false, 00:09:41.207 "abort": false, 00:09:41.207 "seek_hole": false, 00:09:41.207 "seek_data": false, 00:09:41.207 "copy": false, 00:09:41.207 "nvme_iov_md": false 00:09:41.207 }, 00:09:41.207 "memory_domains": [ 00:09:41.207 { 00:09:41.207 "dma_device_id": "system", 00:09:41.207 "dma_device_type": 1 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.207 "dma_device_type": 2 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "dma_device_id": "system", 00:09:41.207 "dma_device_type": 1 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.207 "dma_device_type": 2 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "dma_device_id": "system", 00:09:41.207 "dma_device_type": 1 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.207 "dma_device_type": 2 00:09:41.207 } 00:09:41.207 ], 00:09:41.207 "driver_specific": { 00:09:41.207 "raid": { 00:09:41.207 "uuid": "4ec2a5c5-0844-4738-91b4-6c24d8910380", 00:09:41.207 
"strip_size_kb": 64, 00:09:41.207 "state": "online", 00:09:41.207 "raid_level": "raid0", 00:09:41.207 "superblock": true, 00:09:41.207 "num_base_bdevs": 3, 00:09:41.207 "num_base_bdevs_discovered": 3, 00:09:41.207 "num_base_bdevs_operational": 3, 00:09:41.207 "base_bdevs_list": [ 00:09:41.207 { 00:09:41.207 "name": "NewBaseBdev", 00:09:41.207 "uuid": "fc4e31a9-5e0e-47df-a8c4-aa84398323b5", 00:09:41.207 "is_configured": true, 00:09:41.207 "data_offset": 2048, 00:09:41.207 "data_size": 63488 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "name": "BaseBdev2", 00:09:41.207 "uuid": "1e463450-9b07-4070-8fa5-9a7c247d25cd", 00:09:41.207 "is_configured": true, 00:09:41.207 "data_offset": 2048, 00:09:41.207 "data_size": 63488 00:09:41.207 }, 00:09:41.207 { 00:09:41.207 "name": "BaseBdev3", 00:09:41.207 "uuid": "0fefeef8-b592-4f2f-b320-3b9e7665b80e", 00:09:41.207 "is_configured": true, 00:09:41.207 "data_offset": 2048, 00:09:41.207 "data_size": 63488 00:09:41.207 } 00:09:41.207 ] 00:09:41.207 } 00:09:41.207 } 00:09:41.207 }' 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.207 BaseBdev2 00:09:41.207 BaseBdev3' 00:09:41.207 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.467 03:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.467 [2024-11-05 03:20:55.022252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.467 [2024-11-05 03:20:55.022296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.467 [2024-11-05 03:20:55.022421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.467 [2024-11-05 03:20:55.022488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.467 [2024-11-05 03:20:55.022508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64218 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64218 ']' 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 64218 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64218 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:41.467 killing process with pid 64218 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64218' 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64218 00:09:41.467 [2024-11-05 03:20:55.060272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.467 03:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64218 00:09:41.726 [2024-11-05 03:20:55.295965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.663 03:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.663 00:09:42.663 real 0m12.136s 00:09:42.663 user 0m20.345s 00:09:42.663 sys 0m1.604s 00:09:42.663 03:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.663 03:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 ************************************ 00:09:42.663 END TEST raid_state_function_test_sb 00:09:42.663 ************************************ 00:09:42.663 03:20:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:42.663 03:20:56 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:42.663 03:20:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.663 03:20:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.923 ************************************ 00:09:42.923 START TEST raid_superblock_test 00:09:42.923 ************************************ 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:42.923 03:20:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64849 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64849 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 64849 ']' 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.923 03:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.923 [2024-11-05 03:20:56.419501] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:09:42.923 [2024-11-05 03:20:56.419729] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64849 ] 00:09:43.181 [2024-11-05 03:20:56.606758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.181 [2024-11-05 03:20:56.730921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.440 [2024-11-05 03:20:56.927825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.440 [2024-11-05 03:20:56.927892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:44.009 
03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.009 malloc1 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.009 [2024-11-05 03:20:57.448581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.009 [2024-11-05 03:20:57.448661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.009 [2024-11-05 03:20:57.448704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:44.009 [2024-11-05 03:20:57.448720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.009 [2024-11-05 03:20:57.451604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.009 [2024-11-05 03:20:57.451651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.009 pt1 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.009 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.010 malloc2 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.010 [2024-11-05 03:20:57.505678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.010 [2024-11-05 03:20:57.505749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.010 [2024-11-05 03:20:57.505781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:44.010 [2024-11-05 03:20:57.505796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.010 [2024-11-05 03:20:57.508653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.010 [2024-11-05 03:20:57.508727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.010 
pt2 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.010 malloc3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.010 [2024-11-05 03:20:57.573256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.010 [2024-11-05 03:20:57.573368] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.010 [2024-11-05 03:20:57.573401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:44.010 [2024-11-05 03:20:57.573416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.010 [2024-11-05 03:20:57.576142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.010 [2024-11-05 03:20:57.576204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.010 pt3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.010 [2024-11-05 03:20:57.585298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.010 [2024-11-05 03:20:57.587776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.010 [2024-11-05 03:20:57.587886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.010 [2024-11-05 03:20:57.588116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:44.010 [2024-11-05 03:20:57.588150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.010 [2024-11-05 03:20:57.588472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:44.010 [2024-11-05 03:20:57.588699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:44.010 [2024-11-05 03:20:57.588732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:44.010 [2024-11-05 03:20:57.588919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.010 03:20:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.010 "name": "raid_bdev1", 00:09:44.010 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:44.010 "strip_size_kb": 64, 00:09:44.010 "state": "online", 00:09:44.010 "raid_level": "raid0", 00:09:44.010 "superblock": true, 00:09:44.010 "num_base_bdevs": 3, 00:09:44.010 "num_base_bdevs_discovered": 3, 00:09:44.010 "num_base_bdevs_operational": 3, 00:09:44.010 "base_bdevs_list": [ 00:09:44.010 { 00:09:44.010 "name": "pt1", 00:09:44.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.010 "is_configured": true, 00:09:44.010 "data_offset": 2048, 00:09:44.010 "data_size": 63488 00:09:44.010 }, 00:09:44.010 { 00:09:44.010 "name": "pt2", 00:09:44.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.010 "is_configured": true, 00:09:44.010 "data_offset": 2048, 00:09:44.010 "data_size": 63488 00:09:44.010 }, 00:09:44.010 { 00:09:44.010 "name": "pt3", 00:09:44.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.010 "is_configured": true, 00:09:44.010 "data_offset": 2048, 00:09:44.010 "data_size": 63488 00:09:44.010 } 00:09:44.010 ] 00:09:44.010 }' 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.010 03:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.579 [2024-11-05 03:20:58.097866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.579 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.579 "name": "raid_bdev1", 00:09:44.579 "aliases": [ 00:09:44.579 "a231b3c2-f765-4037-bba2-06bed6e598cf" 00:09:44.579 ], 00:09:44.579 "product_name": "Raid Volume", 00:09:44.579 "block_size": 512, 00:09:44.579 "num_blocks": 190464, 00:09:44.579 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:44.579 "assigned_rate_limits": { 00:09:44.579 "rw_ios_per_sec": 0, 00:09:44.579 "rw_mbytes_per_sec": 0, 00:09:44.579 "r_mbytes_per_sec": 0, 00:09:44.579 "w_mbytes_per_sec": 0 00:09:44.579 }, 00:09:44.579 "claimed": false, 00:09:44.579 "zoned": false, 00:09:44.579 "supported_io_types": { 00:09:44.579 "read": true, 00:09:44.579 "write": true, 00:09:44.579 "unmap": true, 00:09:44.579 "flush": true, 00:09:44.579 "reset": true, 00:09:44.579 "nvme_admin": false, 00:09:44.579 "nvme_io": false, 00:09:44.580 "nvme_io_md": false, 00:09:44.580 "write_zeroes": true, 00:09:44.580 "zcopy": false, 00:09:44.580 "get_zone_info": false, 00:09:44.580 "zone_management": false, 00:09:44.580 "zone_append": false, 00:09:44.580 "compare": 
false, 00:09:44.580 "compare_and_write": false, 00:09:44.580 "abort": false, 00:09:44.580 "seek_hole": false, 00:09:44.580 "seek_data": false, 00:09:44.580 "copy": false, 00:09:44.580 "nvme_iov_md": false 00:09:44.580 }, 00:09:44.580 "memory_domains": [ 00:09:44.580 { 00:09:44.580 "dma_device_id": "system", 00:09:44.580 "dma_device_type": 1 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.580 "dma_device_type": 2 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "dma_device_id": "system", 00:09:44.580 "dma_device_type": 1 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.580 "dma_device_type": 2 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "dma_device_id": "system", 00:09:44.580 "dma_device_type": 1 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.580 "dma_device_type": 2 00:09:44.580 } 00:09:44.580 ], 00:09:44.580 "driver_specific": { 00:09:44.580 "raid": { 00:09:44.580 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:44.580 "strip_size_kb": 64, 00:09:44.580 "state": "online", 00:09:44.580 "raid_level": "raid0", 00:09:44.580 "superblock": true, 00:09:44.580 "num_base_bdevs": 3, 00:09:44.580 "num_base_bdevs_discovered": 3, 00:09:44.580 "num_base_bdevs_operational": 3, 00:09:44.580 "base_bdevs_list": [ 00:09:44.580 { 00:09:44.580 "name": "pt1", 00:09:44.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.580 "is_configured": true, 00:09:44.580 "data_offset": 2048, 00:09:44.580 "data_size": 63488 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "name": "pt2", 00:09:44.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.580 "is_configured": true, 00:09:44.580 "data_offset": 2048, 00:09:44.580 "data_size": 63488 00:09:44.580 }, 00:09:44.580 { 00:09:44.580 "name": "pt3", 00:09:44.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.580 "is_configured": true, 00:09:44.580 "data_offset": 2048, 00:09:44.580 "data_size": 
63488 00:09:44.580 } 00:09:44.580 ] 00:09:44.580 } 00:09:44.580 } 00:09:44.580 }' 00:09:44.580 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.580 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.580 pt2 00:09:44.580 pt3' 00:09:44.580 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 
03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 [2024-11-05 03:20:58.413884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a231b3c2-f765-4037-bba2-06bed6e598cf 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a231b3c2-f765-4037-bba2-06bed6e598cf ']' 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.839 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 [2024-11-05 03:20:58.469495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.839 [2024-11-05 03:20:58.469529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.839 [2024-11-05 03:20:58.469612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.839 [2024-11-05 03:20:58.469705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.839 [2024-11-05 03:20:58.469720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:44.840 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:45.099 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:45.100 03:20:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 [2024-11-05 03:20:58.626176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:45.100 [2024-11-05 03:20:58.628789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:45.100 [2024-11-05 03:20:58.628876] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:45.100 [2024-11-05 03:20:58.628945] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:45.100 [2024-11-05 03:20:58.629029] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:45.100 [2024-11-05 03:20:58.629091] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:45.100 [2024-11-05 03:20:58.629134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.100 [2024-11-05 03:20:58.629156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:45.100 request: 00:09:45.100 { 00:09:45.100 "name": "raid_bdev1", 00:09:45.100 "raid_level": "raid0", 00:09:45.100 "base_bdevs": [ 00:09:45.100 "malloc1", 00:09:45.100 "malloc2", 00:09:45.100 "malloc3" 00:09:45.100 ], 00:09:45.100 "strip_size_kb": 64, 00:09:45.100 "superblock": false, 00:09:45.100 "method": "bdev_raid_create", 00:09:45.100 "req_id": 1 00:09:45.100 } 00:09:45.100 Got JSON-RPC error response 00:09:45.100 response: 00:09:45.100 { 00:09:45.100 "code": -17, 00:09:45.100 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:45.100 } 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 [2024-11-05 03:20:58.694082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.100 [2024-11-05 03:20:58.694217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.100 [2024-11-05 03:20:58.694247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:45.100 [2024-11-05 03:20:58.694270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.100 [2024-11-05 03:20:58.697293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.100 [2024-11-05 03:20:58.697383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.100 [2024-11-05 03:20:58.697491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:45.100 [2024-11-05 03:20:58.697558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:45.100 pt1 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.100 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.360 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.360 "name": "raid_bdev1", 00:09:45.360 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:45.360 
"strip_size_kb": 64, 00:09:45.360 "state": "configuring", 00:09:45.360 "raid_level": "raid0", 00:09:45.360 "superblock": true, 00:09:45.360 "num_base_bdevs": 3, 00:09:45.360 "num_base_bdevs_discovered": 1, 00:09:45.360 "num_base_bdevs_operational": 3, 00:09:45.360 "base_bdevs_list": [ 00:09:45.360 { 00:09:45.360 "name": "pt1", 00:09:45.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.360 "is_configured": true, 00:09:45.360 "data_offset": 2048, 00:09:45.360 "data_size": 63488 00:09:45.360 }, 00:09:45.360 { 00:09:45.360 "name": null, 00:09:45.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.360 "is_configured": false, 00:09:45.360 "data_offset": 2048, 00:09:45.360 "data_size": 63488 00:09:45.360 }, 00:09:45.360 { 00:09:45.360 "name": null, 00:09:45.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.360 "is_configured": false, 00:09:45.360 "data_offset": 2048, 00:09:45.360 "data_size": 63488 00:09:45.360 } 00:09:45.360 ] 00:09:45.360 }' 00:09:45.360 03:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.360 03:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.619 [2024-11-05 03:20:59.222228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.619 [2024-11-05 03:20:59.222347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.619 [2024-11-05 03:20:59.222381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:45.619 [2024-11-05 03:20:59.222396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.619 [2024-11-05 03:20:59.222940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.619 [2024-11-05 03:20:59.223039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.619 [2024-11-05 03:20:59.223173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.619 [2024-11-05 03:20:59.223204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.619 pt2 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.619 [2024-11-05 03:20:59.230252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.619 03:20:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.619 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.879 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.879 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.879 "name": "raid_bdev1", 00:09:45.879 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:45.879 "strip_size_kb": 64, 00:09:45.879 "state": "configuring", 00:09:45.879 "raid_level": "raid0", 00:09:45.879 "superblock": true, 00:09:45.879 "num_base_bdevs": 3, 00:09:45.879 "num_base_bdevs_discovered": 1, 00:09:45.879 "num_base_bdevs_operational": 3, 00:09:45.879 "base_bdevs_list": [ 00:09:45.879 { 00:09:45.879 "name": "pt1", 00:09:45.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.879 "is_configured": true, 00:09:45.879 "data_offset": 2048, 00:09:45.879 "data_size": 63488 00:09:45.879 }, 00:09:45.879 { 00:09:45.879 "name": null, 00:09:45.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.879 "is_configured": false, 00:09:45.879 "data_offset": 0, 00:09:45.879 "data_size": 63488 00:09:45.879 }, 00:09:45.879 { 00:09:45.879 "name": null, 00:09:45.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.879 
"is_configured": false, 00:09:45.879 "data_offset": 2048, 00:09:45.879 "data_size": 63488 00:09:45.879 } 00:09:45.879 ] 00:09:45.879 }' 00:09:45.879 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.879 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.138 [2024-11-05 03:20:59.758414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.138 [2024-11-05 03:20:59.758497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.138 [2024-11-05 03:20:59.758524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:46.138 [2024-11-05 03:20:59.758550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.138 [2024-11-05 03:20:59.759151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.138 [2024-11-05 03:20:59.759205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.138 [2024-11-05 03:20:59.759302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.138 [2024-11-05 03:20:59.759358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.138 pt2 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.138 [2024-11-05 03:20:59.770449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.138 [2024-11-05 03:20:59.770507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.138 [2024-11-05 03:20:59.770528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:46.138 [2024-11-05 03:20:59.770545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.138 [2024-11-05 03:20:59.771088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.138 [2024-11-05 03:20:59.771180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.138 [2024-11-05 03:20:59.771259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:46.138 [2024-11-05 03:20:59.771293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.138 [2024-11-05 03:20:59.771582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.138 [2024-11-05 03:20:59.771603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:46.138 [2024-11-05 03:20:59.771935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:46.138 [2024-11-05 03:20:59.772122] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.138 [2024-11-05 03:20:59.772136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:46.138 [2024-11-05 03:20:59.772345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.138 pt3 00:09:46.138 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.397 "name": "raid_bdev1", 00:09:46.397 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:46.397 "strip_size_kb": 64, 00:09:46.397 "state": "online", 00:09:46.397 "raid_level": "raid0", 00:09:46.397 "superblock": true, 00:09:46.397 "num_base_bdevs": 3, 00:09:46.397 "num_base_bdevs_discovered": 3, 00:09:46.397 "num_base_bdevs_operational": 3, 00:09:46.397 "base_bdevs_list": [ 00:09:46.397 { 00:09:46.397 "name": "pt1", 00:09:46.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.397 "is_configured": true, 00:09:46.397 "data_offset": 2048, 00:09:46.397 "data_size": 63488 00:09:46.397 }, 00:09:46.397 { 00:09:46.397 "name": "pt2", 00:09:46.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.397 "is_configured": true, 00:09:46.397 "data_offset": 2048, 00:09:46.397 "data_size": 63488 00:09:46.397 }, 00:09:46.397 { 00:09:46.397 "name": "pt3", 00:09:46.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.397 "is_configured": true, 00:09:46.397 "data_offset": 2048, 00:09:46.397 "data_size": 63488 00:09:46.397 } 00:09:46.397 ] 00:09:46.397 }' 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.397 03:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.655 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.655 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.655 03:21:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.655 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.655 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.655 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.914 [2024-11-05 03:21:00.302972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.914 "name": "raid_bdev1", 00:09:46.914 "aliases": [ 00:09:46.914 "a231b3c2-f765-4037-bba2-06bed6e598cf" 00:09:46.914 ], 00:09:46.914 "product_name": "Raid Volume", 00:09:46.914 "block_size": 512, 00:09:46.914 "num_blocks": 190464, 00:09:46.914 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:46.914 "assigned_rate_limits": { 00:09:46.914 "rw_ios_per_sec": 0, 00:09:46.914 "rw_mbytes_per_sec": 0, 00:09:46.914 "r_mbytes_per_sec": 0, 00:09:46.914 "w_mbytes_per_sec": 0 00:09:46.914 }, 00:09:46.914 "claimed": false, 00:09:46.914 "zoned": false, 00:09:46.914 "supported_io_types": { 00:09:46.914 "read": true, 00:09:46.914 "write": true, 00:09:46.914 "unmap": true, 00:09:46.914 "flush": true, 00:09:46.914 "reset": true, 00:09:46.914 "nvme_admin": false, 00:09:46.914 "nvme_io": false, 00:09:46.914 "nvme_io_md": false, 00:09:46.914 
"write_zeroes": true, 00:09:46.914 "zcopy": false, 00:09:46.914 "get_zone_info": false, 00:09:46.914 "zone_management": false, 00:09:46.914 "zone_append": false, 00:09:46.914 "compare": false, 00:09:46.914 "compare_and_write": false, 00:09:46.914 "abort": false, 00:09:46.914 "seek_hole": false, 00:09:46.914 "seek_data": false, 00:09:46.914 "copy": false, 00:09:46.914 "nvme_iov_md": false 00:09:46.914 }, 00:09:46.914 "memory_domains": [ 00:09:46.914 { 00:09:46.914 "dma_device_id": "system", 00:09:46.914 "dma_device_type": 1 00:09:46.914 }, 00:09:46.914 { 00:09:46.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.914 "dma_device_type": 2 00:09:46.914 }, 00:09:46.914 { 00:09:46.914 "dma_device_id": "system", 00:09:46.914 "dma_device_type": 1 00:09:46.914 }, 00:09:46.914 { 00:09:46.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.914 "dma_device_type": 2 00:09:46.914 }, 00:09:46.914 { 00:09:46.914 "dma_device_id": "system", 00:09:46.914 "dma_device_type": 1 00:09:46.914 }, 00:09:46.914 { 00:09:46.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.914 "dma_device_type": 2 00:09:46.914 } 00:09:46.914 ], 00:09:46.914 "driver_specific": { 00:09:46.914 "raid": { 00:09:46.914 "uuid": "a231b3c2-f765-4037-bba2-06bed6e598cf", 00:09:46.914 "strip_size_kb": 64, 00:09:46.914 "state": "online", 00:09:46.914 "raid_level": "raid0", 00:09:46.914 "superblock": true, 00:09:46.914 "num_base_bdevs": 3, 00:09:46.914 "num_base_bdevs_discovered": 3, 00:09:46.914 "num_base_bdevs_operational": 3, 00:09:46.914 "base_bdevs_list": [ 00:09:46.914 { 00:09:46.914 "name": "pt1", 00:09:46.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.914 "is_configured": true, 00:09:46.914 "data_offset": 2048, 00:09:46.914 "data_size": 63488 00:09:46.914 }, 00:09:46.914 { 00:09:46.914 "name": "pt2", 00:09:46.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.914 "is_configured": true, 00:09:46.914 "data_offset": 2048, 00:09:46.914 "data_size": 63488 00:09:46.914 }, 00:09:46.914 
{ 00:09:46.914 "name": "pt3", 00:09:46.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.914 "is_configured": true, 00:09:46.914 "data_offset": 2048, 00:09:46.914 "data_size": 63488 00:09:46.914 } 00:09:46.914 ] 00:09:46.914 } 00:09:46.914 } 00:09:46.914 }' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:46.914 pt2 00:09:46.914 pt3' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:46.914 03:21:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.914 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.173 
[2024-11-05 03:21:00.618959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a231b3c2-f765-4037-bba2-06bed6e598cf '!=' a231b3c2-f765-4037-bba2-06bed6e598cf ']' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64849 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 64849 ']' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 64849 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64849 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:47.173 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:47.173 killing process with pid 64849 00:09:47.174 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64849' 00:09:47.174 03:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 64849 00:09:47.174 [2024-11-05 03:21:00.702693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.174 03:21:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 64849 00:09:47.174 [2024-11-05 03:21:00.702805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.174 [2024-11-05 03:21:00.702879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.174 [2024-11-05 03:21:00.702902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:47.433 [2024-11-05 03:21:00.937918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.371 03:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:48.371 00:09:48.371 real 0m5.575s 00:09:48.371 user 0m8.486s 00:09:48.371 sys 0m0.815s 00:09:48.371 03:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.371 03:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.371 ************************************ 00:09:48.371 END TEST raid_superblock_test 00:09:48.371 ************************************ 00:09:48.371 03:21:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:48.371 03:21:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:48.371 03:21:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.371 03:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.371 ************************************ 00:09:48.371 START TEST raid_read_error_test 00:09:48.371 ************************************ 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.371 03:21:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lh8zOfoRWy 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65108 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65108 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65108 ']' 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.371 03:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.630 [2024-11-05 03:21:02.108651] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:09:48.630 [2024-11-05 03:21:02.108825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65108 ] 00:09:48.889 [2024-11-05 03:21:02.291197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.889 [2024-11-05 03:21:02.413050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.148 [2024-11-05 03:21:02.610718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.148 [2024-11-05 03:21:02.610854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 BaseBdev1_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 true 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 [2024-11-05 03:21:03.133452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.717 [2024-11-05 03:21:03.133549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.717 [2024-11-05 03:21:03.133579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.717 [2024-11-05 03:21:03.133598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.717 [2024-11-05 03:21:03.136499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.717 [2024-11-05 03:21:03.136565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.717 BaseBdev1 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 BaseBdev2_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 true 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 [2024-11-05 03:21:03.198543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.717 [2024-11-05 03:21:03.198643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.717 [2024-11-05 03:21:03.198669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.717 [2024-11-05 03:21:03.198689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.717 [2024-11-05 03:21:03.201540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.717 [2024-11-05 03:21:03.201600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.717 BaseBdev2 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.717 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.717 BaseBdev3_malloc 00:09:49.718 03:21:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.718 true 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.718 [2024-11-05 03:21:03.263726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.718 [2024-11-05 03:21:03.263846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.718 [2024-11-05 03:21:03.263875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.718 [2024-11-05 03:21:03.263903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.718 [2024-11-05 03:21:03.266795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.718 [2024-11-05 03:21:03.266873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:49.718 BaseBdev3 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.718 [2024-11-05 03:21:03.271813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.718 [2024-11-05 03:21:03.274241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.718 [2024-11-05 03:21:03.274422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.718 [2024-11-05 03:21:03.274687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:49.718 [2024-11-05 03:21:03.274718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.718 [2024-11-05 03:21:03.275031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:49.718 [2024-11-05 03:21:03.275246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:49.718 [2024-11-05 03:21:03.275277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:49.718 [2024-11-05 03:21:03.275479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.718 03:21:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.718 "name": "raid_bdev1", 00:09:49.718 "uuid": "4d2532bd-e30c-4813-8862-3cb2818ecad1", 00:09:49.718 "strip_size_kb": 64, 00:09:49.718 "state": "online", 00:09:49.718 "raid_level": "raid0", 00:09:49.718 "superblock": true, 00:09:49.718 "num_base_bdevs": 3, 00:09:49.718 "num_base_bdevs_discovered": 3, 00:09:49.718 "num_base_bdevs_operational": 3, 00:09:49.718 "base_bdevs_list": [ 00:09:49.718 { 00:09:49.718 "name": "BaseBdev1", 00:09:49.718 "uuid": "5736ed2c-56e4-5f25-8378-d566ddd08288", 00:09:49.718 "is_configured": true, 00:09:49.718 "data_offset": 2048, 00:09:49.718 "data_size": 63488 00:09:49.718 }, 00:09:49.718 { 00:09:49.718 "name": "BaseBdev2", 00:09:49.718 "uuid": "7eff4072-df73-5d15-ad81-7919d80970f0", 00:09:49.718 "is_configured": true, 00:09:49.718 "data_offset": 2048, 00:09:49.718 "data_size": 63488 
00:09:49.718 }, 00:09:49.718 { 00:09:49.718 "name": "BaseBdev3", 00:09:49.718 "uuid": "e33b79ac-ce92-5962-a61c-efa434fee0c5", 00:09:49.718 "is_configured": true, 00:09:49.718 "data_offset": 2048, 00:09:49.718 "data_size": 63488 00:09:49.718 } 00:09:49.718 ] 00:09:49.718 }' 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.718 03:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.287 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.287 03:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.546 [2024-11-05 03:21:03.937509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.484 "name": "raid_bdev1", 00:09:51.484 "uuid": "4d2532bd-e30c-4813-8862-3cb2818ecad1", 00:09:51.484 "strip_size_kb": 64, 00:09:51.484 "state": "online", 00:09:51.484 "raid_level": "raid0", 00:09:51.484 "superblock": true, 00:09:51.484 "num_base_bdevs": 3, 00:09:51.484 "num_base_bdevs_discovered": 3, 00:09:51.484 "num_base_bdevs_operational": 3, 00:09:51.484 "base_bdevs_list": [ 00:09:51.484 { 00:09:51.484 "name": "BaseBdev1", 00:09:51.484 "uuid": "5736ed2c-56e4-5f25-8378-d566ddd08288", 00:09:51.484 "is_configured": true, 00:09:51.484 "data_offset": 2048, 00:09:51.484 "data_size": 63488 
00:09:51.484 }, 00:09:51.484 { 00:09:51.484 "name": "BaseBdev2", 00:09:51.484 "uuid": "7eff4072-df73-5d15-ad81-7919d80970f0", 00:09:51.484 "is_configured": true, 00:09:51.484 "data_offset": 2048, 00:09:51.484 "data_size": 63488 00:09:51.484 }, 00:09:51.484 { 00:09:51.484 "name": "BaseBdev3", 00:09:51.484 "uuid": "e33b79ac-ce92-5962-a61c-efa434fee0c5", 00:09:51.484 "is_configured": true, 00:09:51.484 "data_offset": 2048, 00:09:51.484 "data_size": 63488 00:09:51.484 } 00:09:51.484 ] 00:09:51.484 }' 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.484 03:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.744 [2024-11-05 03:21:05.363498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.744 [2024-11-05 03:21:05.363540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.744 [2024-11-05 03:21:05.366982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.744 [2024-11-05 03:21:05.367079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.744 [2024-11-05 03:21:05.367130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.744 [2024-11-05 03:21:05.367145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:51.744 { 00:09:51.744 "results": [ 00:09:51.744 { 00:09:51.744 "job": "raid_bdev1", 00:09:51.744 "core_mask": "0x1", 00:09:51.744 "workload": "randrw", 00:09:51.744 "percentage": 50, 
00:09:51.744 "status": "finished", 00:09:51.744 "queue_depth": 1, 00:09:51.744 "io_size": 131072, 00:09:51.744 "runtime": 1.423526, 00:09:51.744 "iops": 11332.423854569568, 00:09:51.744 "mibps": 1416.552981821196, 00:09:51.744 "io_failed": 1, 00:09:51.744 "io_timeout": 0, 00:09:51.744 "avg_latency_us": 123.29330621030861, 00:09:51.744 "min_latency_us": 37.00363636363636, 00:09:51.744 "max_latency_us": 1899.0545454545454 00:09:51.744 } 00:09:51.744 ], 00:09:51.744 "core_count": 1 00:09:51.744 } 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65108 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65108 ']' 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65108 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.744 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65108 00:09:52.003 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:52.003 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:52.003 killing process with pid 65108 00:09:52.003 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65108' 00:09:52.003 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65108 00:09:52.003 [2024-11-05 03:21:05.402394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.003 03:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65108 00:09:52.003 [2024-11-05 
03:21:05.588775] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lh8zOfoRWy 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:53.382 00:09:53.382 real 0m4.653s 00:09:53.382 user 0m5.819s 00:09:53.382 sys 0m0.618s 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.382 ************************************ 00:09:53.382 END TEST raid_read_error_test 00:09:53.382 ************************************ 00:09:53.382 03:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.382 03:21:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:53.382 03:21:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:53.382 03:21:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.382 03:21:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.382 ************************************ 00:09:53.382 START TEST raid_write_error_test 00:09:53.382 ************************************ 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:09:53.382 03:21:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.382 03:21:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fwi0iys02B 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65259 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65259 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65259 ']' 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:53.382 03:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.382 [2024-11-05 03:21:06.771573] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:09:53.382 [2024-11-05 03:21:06.771790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65259 ] 00:09:53.382 [2024-11-05 03:21:06.953784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.642 [2024-11-05 03:21:07.068451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.642 [2024-11-05 03:21:07.261643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.642 [2024-11-05 03:21:07.261726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.211 BaseBdev1_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.211 true 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.211 [2024-11-05 03:21:07.780785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.211 [2024-11-05 03:21:07.780883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.211 [2024-11-05 03:21:07.780912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.211 [2024-11-05 03:21:07.780930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.211 [2024-11-05 03:21:07.783952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.211 [2024-11-05 03:21:07.784046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.211 BaseBdev1 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.211 BaseBdev2_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.211 true 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.211 [2024-11-05 03:21:07.840093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.211 [2024-11-05 03:21:07.840198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.211 [2024-11-05 03:21:07.840222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:54.211 [2024-11-05 03:21:07.840238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.211 [2024-11-05 03:21:07.843110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.211 [2024-11-05 03:21:07.843179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.211 BaseBdev2 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.211 03:21:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.211 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.471 BaseBdev3_malloc 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.471 true 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.471 [2024-11-05 03:21:07.907852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:54.471 [2024-11-05 03:21:07.907936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.471 [2024-11-05 03:21:07.907961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:54.471 [2024-11-05 03:21:07.907977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.471 [2024-11-05 03:21:07.910920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.471 [2024-11-05 03:21:07.910984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:54.471 BaseBdev3 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.471 [2024-11-05 03:21:07.915945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.471 [2024-11-05 03:21:07.918506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.471 [2024-11-05 03:21:07.918618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.471 [2024-11-05 03:21:07.918882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:54.471 [2024-11-05 03:21:07.918907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:54.471 [2024-11-05 03:21:07.919189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:54.471 [2024-11-05 03:21:07.919477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:54.471 [2024-11-05 03:21:07.919502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:54.471 [2024-11-05 03:21:07.919683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.471 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.471 "name": "raid_bdev1", 00:09:54.471 "uuid": "a51c999a-4f11-40e7-9092-59792e70c9de", 00:09:54.471 "strip_size_kb": 64, 00:09:54.471 "state": "online", 00:09:54.471 "raid_level": "raid0", 00:09:54.471 "superblock": true, 00:09:54.471 "num_base_bdevs": 3, 00:09:54.471 "num_base_bdevs_discovered": 3, 00:09:54.471 "num_base_bdevs_operational": 3, 00:09:54.471 "base_bdevs_list": [ 00:09:54.471 { 00:09:54.471 "name": "BaseBdev1", 
00:09:54.471 "uuid": "485be620-c473-5356-a5eb-e7d2c3646561", 00:09:54.471 "is_configured": true, 00:09:54.471 "data_offset": 2048, 00:09:54.471 "data_size": 63488 00:09:54.471 }, 00:09:54.471 { 00:09:54.471 "name": "BaseBdev2", 00:09:54.471 "uuid": "64ab5b89-c74b-5e17-8e00-0c996e59de11", 00:09:54.471 "is_configured": true, 00:09:54.472 "data_offset": 2048, 00:09:54.472 "data_size": 63488 00:09:54.472 }, 00:09:54.472 { 00:09:54.472 "name": "BaseBdev3", 00:09:54.472 "uuid": "9514a3c8-c5c1-5d00-abe5-8063d70efd3b", 00:09:54.472 "is_configured": true, 00:09:54.472 "data_offset": 2048, 00:09:54.472 "data_size": 63488 00:09:54.472 } 00:09:54.472 ] 00:09:54.472 }' 00:09:54.472 03:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.472 03:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.041 03:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.041 03:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.041 [2024-11-05 03:21:08.561387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.978 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.979 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.979 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.979 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.979 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.979 "name": "raid_bdev1", 00:09:55.979 "uuid": "a51c999a-4f11-40e7-9092-59792e70c9de", 00:09:55.979 "strip_size_kb": 64, 00:09:55.979 "state": "online", 00:09:55.979 
"raid_level": "raid0", 00:09:55.979 "superblock": true, 00:09:55.979 "num_base_bdevs": 3, 00:09:55.979 "num_base_bdevs_discovered": 3, 00:09:55.979 "num_base_bdevs_operational": 3, 00:09:55.979 "base_bdevs_list": [ 00:09:55.979 { 00:09:55.979 "name": "BaseBdev1", 00:09:55.979 "uuid": "485be620-c473-5356-a5eb-e7d2c3646561", 00:09:55.979 "is_configured": true, 00:09:55.979 "data_offset": 2048, 00:09:55.979 "data_size": 63488 00:09:55.979 }, 00:09:55.979 { 00:09:55.979 "name": "BaseBdev2", 00:09:55.979 "uuid": "64ab5b89-c74b-5e17-8e00-0c996e59de11", 00:09:55.979 "is_configured": true, 00:09:55.979 "data_offset": 2048, 00:09:55.979 "data_size": 63488 00:09:55.979 }, 00:09:55.979 { 00:09:55.979 "name": "BaseBdev3", 00:09:55.979 "uuid": "9514a3c8-c5c1-5d00-abe5-8063d70efd3b", 00:09:55.979 "is_configured": true, 00:09:55.979 "data_offset": 2048, 00:09:55.979 "data_size": 63488 00:09:55.979 } 00:09:55.979 ] 00:09:55.979 }' 00:09:55.979 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.979 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.545 [2024-11-05 03:21:09.983934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.545 [2024-11-05 03:21:09.984138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.545 [2024-11-05 03:21:09.987524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.545 [2024-11-05 03:21:09.987576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.545 [2024-11-05 03:21:09.987624] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.545 [2024-11-05 03:21:09.987637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:56.545 { 00:09:56.545 "results": [ 00:09:56.545 { 00:09:56.545 "job": "raid_bdev1", 00:09:56.545 "core_mask": "0x1", 00:09:56.545 "workload": "randrw", 00:09:56.545 "percentage": 50, 00:09:56.545 "status": "finished", 00:09:56.545 "queue_depth": 1, 00:09:56.545 "io_size": 131072, 00:09:56.545 "runtime": 1.419996, 00:09:56.545 "iops": 11173.270910622283, 00:09:56.545 "mibps": 1396.6588638277854, 00:09:56.545 "io_failed": 1, 00:09:56.545 "io_timeout": 0, 00:09:56.545 "avg_latency_us": 125.34020683293515, 00:09:56.545 "min_latency_us": 28.16, 00:09:56.545 "max_latency_us": 1854.370909090909 00:09:56.545 } 00:09:56.545 ], 00:09:56.545 "core_count": 1 00:09:56.545 } 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65259 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65259 ']' 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65259 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:56.545 03:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65259 00:09:56.545 killing process with pid 65259 00:09:56.545 03:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:56.545 03:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:56.545 03:21:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65259' 00:09:56.545 03:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65259 00:09:56.545 [2024-11-05 03:21:10.024807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.545 03:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65259 00:09:56.805 [2024-11-05 03:21:10.208706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fwi0iys02B 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:57.761 00:09:57.761 real 0m4.550s 00:09:57.761 user 0m5.689s 00:09:57.761 sys 0m0.575s 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.761 ************************************ 00:09:57.761 END TEST raid_write_error_test 00:09:57.761 ************************************ 00:09:57.761 03:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.761 03:21:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.761 03:21:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:57.761 03:21:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:57.761 03:21:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.761 03:21:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.761 ************************************ 00:09:57.761 START TEST raid_state_function_test 00:09:57.761 ************************************ 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.761 03:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.761 Process raid pid: 65397 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65397 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65397' 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65397 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.761 03:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65397 ']' 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.761 03:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.761 [2024-11-05 03:21:11.362878] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:09:57.761 [2024-11-05 03:21:11.363083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.020 [2024-11-05 03:21:11.549628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.279 [2024-11-05 03:21:11.665712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.279 [2024-11-05 03:21:11.865701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.279 [2024-11-05 03:21:11.865951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 [2024-11-05 03:21:12.336823] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.848 [2024-11-05 03:21:12.337054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.848 [2024-11-05 03:21:12.337082] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.848 [2024-11-05 03:21:12.337099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.848 [2024-11-05 03:21:12.337109] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.848 [2024-11-05 03:21:12.337122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.848 "name": "Existed_Raid", 00:09:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.848 "strip_size_kb": 64, 00:09:58.848 "state": "configuring", 00:09:58.848 "raid_level": "concat", 00:09:58.848 "superblock": false, 00:09:58.848 "num_base_bdevs": 3, 00:09:58.848 "num_base_bdevs_discovered": 0, 00:09:58.848 "num_base_bdevs_operational": 3, 00:09:58.848 "base_bdevs_list": [ 00:09:58.848 { 00:09:58.848 "name": "BaseBdev1", 00:09:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.848 "is_configured": false, 00:09:58.848 "data_offset": 0, 00:09:58.848 "data_size": 0 00:09:58.848 }, 00:09:58.848 { 00:09:58.848 "name": "BaseBdev2", 00:09:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.848 "is_configured": false, 00:09:58.848 "data_offset": 0, 00:09:58.848 "data_size": 0 00:09:58.848 }, 00:09:58.848 { 00:09:58.848 "name": "BaseBdev3", 00:09:58.848 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.848 "is_configured": false, 00:09:58.848 "data_offset": 0, 00:09:58.848 "data_size": 0 00:09:58.848 } 00:09:58.848 ] 00:09:58.848 }' 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.848 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [2024-11-05 03:21:12.872995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.418 [2024-11-05 03:21:12.873053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [2024-11-05 03:21:12.884989] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.418 [2024-11-05 03:21:12.885061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.418 [2024-11-05 03:21:12.885075] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.418 [2024-11-05 03:21:12.885089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:59.418 [2024-11-05 03:21:12.885097] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.418 [2024-11-05 03:21:12.885110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [2024-11-05 03:21:12.928147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.418 BaseBdev1 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [ 00:09:59.418 { 00:09:59.418 "name": "BaseBdev1", 00:09:59.418 "aliases": [ 00:09:59.418 "88607764-5a12-46cc-a34f-23f271d077bb" 00:09:59.418 ], 00:09:59.418 "product_name": "Malloc disk", 00:09:59.418 "block_size": 512, 00:09:59.418 "num_blocks": 65536, 00:09:59.418 "uuid": "88607764-5a12-46cc-a34f-23f271d077bb", 00:09:59.418 "assigned_rate_limits": { 00:09:59.418 "rw_ios_per_sec": 0, 00:09:59.418 "rw_mbytes_per_sec": 0, 00:09:59.418 "r_mbytes_per_sec": 0, 00:09:59.418 "w_mbytes_per_sec": 0 00:09:59.418 }, 00:09:59.418 "claimed": true, 00:09:59.418 "claim_type": "exclusive_write", 00:09:59.418 "zoned": false, 00:09:59.418 "supported_io_types": { 00:09:59.418 "read": true, 00:09:59.418 "write": true, 00:09:59.418 "unmap": true, 00:09:59.418 "flush": true, 00:09:59.418 "reset": true, 00:09:59.418 "nvme_admin": false, 00:09:59.418 "nvme_io": false, 00:09:59.418 "nvme_io_md": false, 00:09:59.418 "write_zeroes": true, 00:09:59.418 "zcopy": true, 00:09:59.418 "get_zone_info": false, 00:09:59.418 "zone_management": false, 00:09:59.418 "zone_append": false, 00:09:59.418 "compare": false, 00:09:59.418 "compare_and_write": false, 00:09:59.418 "abort": true, 00:09:59.418 "seek_hole": false, 00:09:59.418 "seek_data": false, 00:09:59.418 "copy": true, 00:09:59.418 "nvme_iov_md": false 00:09:59.418 }, 00:09:59.418 "memory_domains": [ 00:09:59.418 { 00:09:59.418 "dma_device_id": "system", 00:09:59.418 "dma_device_type": 1 00:09:59.418 }, 00:09:59.418 { 00:09:59.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:59.418 "dma_device_type": 2 00:09:59.418 } 00:09:59.418 ], 00:09:59.418 "driver_specific": {} 00:09:59.418 } 00:09:59.418 ] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 03:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.418 03:21:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.418 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.418 "name": "Existed_Raid", 00:09:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.418 "strip_size_kb": 64, 00:09:59.418 "state": "configuring", 00:09:59.418 "raid_level": "concat", 00:09:59.418 "superblock": false, 00:09:59.418 "num_base_bdevs": 3, 00:09:59.418 "num_base_bdevs_discovered": 1, 00:09:59.418 "num_base_bdevs_operational": 3, 00:09:59.418 "base_bdevs_list": [ 00:09:59.418 { 00:09:59.418 "name": "BaseBdev1", 00:09:59.418 "uuid": "88607764-5a12-46cc-a34f-23f271d077bb", 00:09:59.418 "is_configured": true, 00:09:59.418 "data_offset": 0, 00:09:59.418 "data_size": 65536 00:09:59.418 }, 00:09:59.418 { 00:09:59.418 "name": "BaseBdev2", 00:09:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.418 "is_configured": false, 00:09:59.418 "data_offset": 0, 00:09:59.418 "data_size": 0 00:09:59.418 }, 00:09:59.418 { 00:09:59.418 "name": "BaseBdev3", 00:09:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.418 "is_configured": false, 00:09:59.418 "data_offset": 0, 00:09:59.418 "data_size": 0 00:09:59.418 } 00:09:59.418 ] 00:09:59.418 }' 00:09:59.418 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.418 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.986 [2024-11-05 03:21:13.472291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.986 [2024-11-05 03:21:13.472401] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.986 [2024-11-05 03:21:13.480404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.986 [2024-11-05 03:21:13.483101] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.986 [2024-11-05 03:21:13.483169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.986 [2024-11-05 03:21:13.483184] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.986 [2024-11-05 03:21:13.483197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.986 03:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.986 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.986 "name": "Existed_Raid", 00:09:59.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.986 "strip_size_kb": 64, 00:09:59.986 "state": "configuring", 00:09:59.986 "raid_level": "concat", 00:09:59.986 "superblock": false, 00:09:59.986 "num_base_bdevs": 3, 00:09:59.987 "num_base_bdevs_discovered": 1, 00:09:59.987 "num_base_bdevs_operational": 3, 00:09:59.987 "base_bdevs_list": [ 00:09:59.987 { 00:09:59.987 "name": "BaseBdev1", 00:09:59.987 "uuid": "88607764-5a12-46cc-a34f-23f271d077bb", 00:09:59.987 "is_configured": true, 00:09:59.987 "data_offset": 
0, 00:09:59.987 "data_size": 65536 00:09:59.987 }, 00:09:59.987 { 00:09:59.987 "name": "BaseBdev2", 00:09:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.987 "is_configured": false, 00:09:59.987 "data_offset": 0, 00:09:59.987 "data_size": 0 00:09:59.987 }, 00:09:59.987 { 00:09:59.987 "name": "BaseBdev3", 00:09:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.987 "is_configured": false, 00:09:59.987 "data_offset": 0, 00:09:59.987 "data_size": 0 00:09:59.987 } 00:09:59.987 ] 00:09:59.987 }' 00:09:59.987 03:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.987 03:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.555 [2024-11-05 03:21:14.061662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.555 BaseBdev2 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.555 [ 00:10:00.555 { 00:10:00.555 "name": "BaseBdev2", 00:10:00.555 "aliases": [ 00:10:00.555 "9d9168f4-c41b-4861-a341-5ec53b33283b" 00:10:00.555 ], 00:10:00.555 "product_name": "Malloc disk", 00:10:00.555 "block_size": 512, 00:10:00.555 "num_blocks": 65536, 00:10:00.555 "uuid": "9d9168f4-c41b-4861-a341-5ec53b33283b", 00:10:00.555 "assigned_rate_limits": { 00:10:00.555 "rw_ios_per_sec": 0, 00:10:00.555 "rw_mbytes_per_sec": 0, 00:10:00.555 "r_mbytes_per_sec": 0, 00:10:00.555 "w_mbytes_per_sec": 0 00:10:00.555 }, 00:10:00.555 "claimed": true, 00:10:00.555 "claim_type": "exclusive_write", 00:10:00.555 "zoned": false, 00:10:00.555 "supported_io_types": { 00:10:00.555 "read": true, 00:10:00.555 "write": true, 00:10:00.555 "unmap": true, 00:10:00.555 "flush": true, 00:10:00.555 "reset": true, 00:10:00.555 "nvme_admin": false, 00:10:00.555 "nvme_io": false, 00:10:00.555 "nvme_io_md": false, 00:10:00.555 "write_zeroes": true, 00:10:00.555 "zcopy": true, 00:10:00.555 "get_zone_info": false, 00:10:00.555 "zone_management": false, 00:10:00.555 "zone_append": false, 00:10:00.555 "compare": false, 00:10:00.555 "compare_and_write": false, 00:10:00.555 "abort": true, 00:10:00.555 "seek_hole": 
false, 00:10:00.555 "seek_data": false, 00:10:00.555 "copy": true, 00:10:00.555 "nvme_iov_md": false 00:10:00.555 }, 00:10:00.555 "memory_domains": [ 00:10:00.555 { 00:10:00.555 "dma_device_id": "system", 00:10:00.555 "dma_device_type": 1 00:10:00.555 }, 00:10:00.555 { 00:10:00.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.555 "dma_device_type": 2 00:10:00.555 } 00:10:00.555 ], 00:10:00.555 "driver_specific": {} 00:10:00.555 } 00:10:00.555 ] 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.555 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.556 "name": "Existed_Raid", 00:10:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.556 "strip_size_kb": 64, 00:10:00.556 "state": "configuring", 00:10:00.556 "raid_level": "concat", 00:10:00.556 "superblock": false, 00:10:00.556 "num_base_bdevs": 3, 00:10:00.556 "num_base_bdevs_discovered": 2, 00:10:00.556 "num_base_bdevs_operational": 3, 00:10:00.556 "base_bdevs_list": [ 00:10:00.556 { 00:10:00.556 "name": "BaseBdev1", 00:10:00.556 "uuid": "88607764-5a12-46cc-a34f-23f271d077bb", 00:10:00.556 "is_configured": true, 00:10:00.556 "data_offset": 0, 00:10:00.556 "data_size": 65536 00:10:00.556 }, 00:10:00.556 { 00:10:00.556 "name": "BaseBdev2", 00:10:00.556 "uuid": "9d9168f4-c41b-4861-a341-5ec53b33283b", 00:10:00.556 "is_configured": true, 00:10:00.556 "data_offset": 0, 00:10:00.556 "data_size": 65536 00:10:00.556 }, 00:10:00.556 { 00:10:00.556 "name": "BaseBdev3", 00:10:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.556 "is_configured": false, 00:10:00.556 "data_offset": 0, 00:10:00.556 "data_size": 0 00:10:00.556 } 00:10:00.556 ] 00:10:00.556 }' 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.556 03:21:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.124 [2024-11-05 03:21:14.670488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.124 [2024-11-05 03:21:14.670539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.124 [2024-11-05 03:21:14.670564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:01.124 [2024-11-05 03:21:14.670911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:01.124 [2024-11-05 03:21:14.671099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.124 [2024-11-05 03:21:14.671115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:01.124 [2024-11-05 03:21:14.671470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.124 BaseBdev3 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:01.124 03:21:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.124 [ 00:10:01.124 { 00:10:01.124 "name": "BaseBdev3", 00:10:01.124 "aliases": [ 00:10:01.124 "7b47a61b-add5-4bac-a479-2818e55dbf22" 00:10:01.124 ], 00:10:01.124 "product_name": "Malloc disk", 00:10:01.124 "block_size": 512, 00:10:01.124 "num_blocks": 65536, 00:10:01.124 "uuid": "7b47a61b-add5-4bac-a479-2818e55dbf22", 00:10:01.124 "assigned_rate_limits": { 00:10:01.124 "rw_ios_per_sec": 0, 00:10:01.124 "rw_mbytes_per_sec": 0, 00:10:01.124 "r_mbytes_per_sec": 0, 00:10:01.124 "w_mbytes_per_sec": 0 00:10:01.124 }, 00:10:01.124 "claimed": true, 00:10:01.124 "claim_type": "exclusive_write", 00:10:01.124 "zoned": false, 00:10:01.124 "supported_io_types": { 00:10:01.124 "read": true, 00:10:01.124 "write": true, 00:10:01.124 "unmap": true, 00:10:01.124 "flush": true, 00:10:01.124 "reset": true, 00:10:01.124 "nvme_admin": false, 00:10:01.124 "nvme_io": false, 00:10:01.124 "nvme_io_md": false, 00:10:01.124 "write_zeroes": true, 00:10:01.124 "zcopy": true, 00:10:01.124 "get_zone_info": false, 00:10:01.124 "zone_management": false, 00:10:01.124 "zone_append": false, 00:10:01.124 "compare": false, 
00:10:01.124 "compare_and_write": false, 00:10:01.124 "abort": true, 00:10:01.124 "seek_hole": false, 00:10:01.124 "seek_data": false, 00:10:01.124 "copy": true, 00:10:01.124 "nvme_iov_md": false 00:10:01.124 }, 00:10:01.124 "memory_domains": [ 00:10:01.124 { 00:10:01.124 "dma_device_id": "system", 00:10:01.124 "dma_device_type": 1 00:10:01.124 }, 00:10:01.124 { 00:10:01.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.124 "dma_device_type": 2 00:10:01.124 } 00:10:01.124 ], 00:10:01.124 "driver_specific": {} 00:10:01.124 } 00:10:01.124 ] 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.124 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.384 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.384 "name": "Existed_Raid", 00:10:01.384 "uuid": "10a0714e-bf9f-491e-b821-156ae0411840", 00:10:01.384 "strip_size_kb": 64, 00:10:01.384 "state": "online", 00:10:01.384 "raid_level": "concat", 00:10:01.384 "superblock": false, 00:10:01.384 "num_base_bdevs": 3, 00:10:01.384 "num_base_bdevs_discovered": 3, 00:10:01.384 "num_base_bdevs_operational": 3, 00:10:01.384 "base_bdevs_list": [ 00:10:01.384 { 00:10:01.384 "name": "BaseBdev1", 00:10:01.384 "uuid": "88607764-5a12-46cc-a34f-23f271d077bb", 00:10:01.384 "is_configured": true, 00:10:01.384 "data_offset": 0, 00:10:01.384 "data_size": 65536 00:10:01.384 }, 00:10:01.384 { 00:10:01.384 "name": "BaseBdev2", 00:10:01.384 "uuid": "9d9168f4-c41b-4861-a341-5ec53b33283b", 00:10:01.384 "is_configured": true, 00:10:01.384 "data_offset": 0, 00:10:01.384 "data_size": 65536 00:10:01.384 }, 00:10:01.384 { 00:10:01.384 "name": "BaseBdev3", 00:10:01.384 "uuid": "7b47a61b-add5-4bac-a479-2818e55dbf22", 00:10:01.384 "is_configured": true, 00:10:01.384 "data_offset": 0, 00:10:01.384 "data_size": 65536 00:10:01.384 } 00:10:01.384 ] 00:10:01.384 }' 00:10:01.384 03:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:01.384 03:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.643 [2024-11-05 03:21:15.251062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.643 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.902 "name": "Existed_Raid", 00:10:01.902 "aliases": [ 00:10:01.902 "10a0714e-bf9f-491e-b821-156ae0411840" 00:10:01.902 ], 00:10:01.902 "product_name": "Raid Volume", 00:10:01.902 "block_size": 512, 00:10:01.902 "num_blocks": 196608, 00:10:01.902 "uuid": "10a0714e-bf9f-491e-b821-156ae0411840", 00:10:01.902 "assigned_rate_limits": { 00:10:01.902 "rw_ios_per_sec": 0, 00:10:01.902 "rw_mbytes_per_sec": 0, 00:10:01.902 "r_mbytes_per_sec": 
0, 00:10:01.902 "w_mbytes_per_sec": 0 00:10:01.902 }, 00:10:01.902 "claimed": false, 00:10:01.902 "zoned": false, 00:10:01.902 "supported_io_types": { 00:10:01.902 "read": true, 00:10:01.902 "write": true, 00:10:01.902 "unmap": true, 00:10:01.902 "flush": true, 00:10:01.902 "reset": true, 00:10:01.902 "nvme_admin": false, 00:10:01.902 "nvme_io": false, 00:10:01.902 "nvme_io_md": false, 00:10:01.902 "write_zeroes": true, 00:10:01.902 "zcopy": false, 00:10:01.902 "get_zone_info": false, 00:10:01.902 "zone_management": false, 00:10:01.902 "zone_append": false, 00:10:01.902 "compare": false, 00:10:01.902 "compare_and_write": false, 00:10:01.902 "abort": false, 00:10:01.902 "seek_hole": false, 00:10:01.902 "seek_data": false, 00:10:01.902 "copy": false, 00:10:01.902 "nvme_iov_md": false 00:10:01.902 }, 00:10:01.902 "memory_domains": [ 00:10:01.902 { 00:10:01.902 "dma_device_id": "system", 00:10:01.902 "dma_device_type": 1 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.902 "dma_device_type": 2 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "dma_device_id": "system", 00:10:01.902 "dma_device_type": 1 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.902 "dma_device_type": 2 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "dma_device_id": "system", 00:10:01.902 "dma_device_type": 1 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.902 "dma_device_type": 2 00:10:01.902 } 00:10:01.902 ], 00:10:01.902 "driver_specific": { 00:10:01.902 "raid": { 00:10:01.902 "uuid": "10a0714e-bf9f-491e-b821-156ae0411840", 00:10:01.902 "strip_size_kb": 64, 00:10:01.902 "state": "online", 00:10:01.902 "raid_level": "concat", 00:10:01.902 "superblock": false, 00:10:01.902 "num_base_bdevs": 3, 00:10:01.902 "num_base_bdevs_discovered": 3, 00:10:01.902 "num_base_bdevs_operational": 3, 00:10:01.902 "base_bdevs_list": [ 00:10:01.902 { 00:10:01.902 "name": "BaseBdev1", 
00:10:01.902 "uuid": "88607764-5a12-46cc-a34f-23f271d077bb", 00:10:01.902 "is_configured": true, 00:10:01.902 "data_offset": 0, 00:10:01.902 "data_size": 65536 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "name": "BaseBdev2", 00:10:01.902 "uuid": "9d9168f4-c41b-4861-a341-5ec53b33283b", 00:10:01.902 "is_configured": true, 00:10:01.902 "data_offset": 0, 00:10:01.902 "data_size": 65536 00:10:01.902 }, 00:10:01.902 { 00:10:01.902 "name": "BaseBdev3", 00:10:01.902 "uuid": "7b47a61b-add5-4bac-a479-2818e55dbf22", 00:10:01.902 "is_configured": true, 00:10:01.902 "data_offset": 0, 00:10:01.902 "data_size": 65536 00:10:01.902 } 00:10:01.902 ] 00:10:01.902 } 00:10:01.902 } 00:10:01.902 }' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.902 BaseBdev2 00:10:01.902 BaseBdev3' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.902 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.903 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.162 [2024-11-05 03:21:15.558882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.162 [2024-11-05 03:21:15.558915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.162 [2024-11-05 03:21:15.558977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.162 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.163 "name": "Existed_Raid", 00:10:02.163 "uuid": "10a0714e-bf9f-491e-b821-156ae0411840", 00:10:02.163 "strip_size_kb": 64, 00:10:02.163 "state": "offline", 00:10:02.163 "raid_level": "concat", 00:10:02.163 "superblock": false, 00:10:02.163 "num_base_bdevs": 3, 00:10:02.163 "num_base_bdevs_discovered": 2, 00:10:02.163 "num_base_bdevs_operational": 2, 00:10:02.163 "base_bdevs_list": [ 00:10:02.163 { 00:10:02.163 "name": null, 00:10:02.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.163 "is_configured": false, 00:10:02.163 "data_offset": 0, 00:10:02.163 "data_size": 65536 00:10:02.163 }, 00:10:02.163 { 00:10:02.163 "name": "BaseBdev2", 00:10:02.163 "uuid": 
"9d9168f4-c41b-4861-a341-5ec53b33283b", 00:10:02.163 "is_configured": true, 00:10:02.163 "data_offset": 0, 00:10:02.163 "data_size": 65536 00:10:02.163 }, 00:10:02.163 { 00:10:02.163 "name": "BaseBdev3", 00:10:02.163 "uuid": "7b47a61b-add5-4bac-a479-2818e55dbf22", 00:10:02.163 "is_configured": true, 00:10:02.163 "data_offset": 0, 00:10:02.163 "data_size": 65536 00:10:02.163 } 00:10:02.163 ] 00:10:02.163 }' 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.163 03:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 [2024-11-05 03:21:16.226918] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.731 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 [2024-11-05 03:21:16.357274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.731 [2024-11-05 03:21:16.357347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.999 03:21:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.999 BaseBdev2 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:02.999 
03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.999 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.000 [ 00:10:03.000 { 00:10:03.000 "name": "BaseBdev2", 00:10:03.000 "aliases": [ 00:10:03.000 "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3" 00:10:03.000 ], 00:10:03.000 "product_name": "Malloc disk", 00:10:03.000 "block_size": 512, 00:10:03.000 "num_blocks": 65536, 00:10:03.000 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3", 00:10:03.000 "assigned_rate_limits": { 00:10:03.000 "rw_ios_per_sec": 0, 00:10:03.000 "rw_mbytes_per_sec": 0, 00:10:03.000 "r_mbytes_per_sec": 0, 00:10:03.000 "w_mbytes_per_sec": 0 00:10:03.000 }, 00:10:03.000 "claimed": false, 00:10:03.000 "zoned": false, 00:10:03.000 "supported_io_types": { 00:10:03.000 "read": true, 00:10:03.000 "write": true, 00:10:03.000 "unmap": true, 00:10:03.000 "flush": true, 00:10:03.000 "reset": true, 00:10:03.000 "nvme_admin": false, 00:10:03.000 "nvme_io": false, 00:10:03.000 "nvme_io_md": false, 00:10:03.000 "write_zeroes": true, 
00:10:03.000 "zcopy": true, 00:10:03.000 "get_zone_info": false, 00:10:03.000 "zone_management": false, 00:10:03.000 "zone_append": false, 00:10:03.000 "compare": false, 00:10:03.000 "compare_and_write": false, 00:10:03.000 "abort": true, 00:10:03.000 "seek_hole": false, 00:10:03.000 "seek_data": false, 00:10:03.000 "copy": true, 00:10:03.000 "nvme_iov_md": false 00:10:03.000 }, 00:10:03.000 "memory_domains": [ 00:10:03.000 { 00:10:03.000 "dma_device_id": "system", 00:10:03.000 "dma_device_type": 1 00:10:03.000 }, 00:10:03.000 { 00:10:03.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.000 "dma_device_type": 2 00:10:03.000 } 00:10:03.000 ], 00:10:03.000 "driver_specific": {} 00:10:03.000 } 00:10:03.000 ] 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.000 BaseBdev3 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.000 03:21:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.000 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.292 [ 00:10:03.292 { 00:10:03.292 "name": "BaseBdev3", 00:10:03.292 "aliases": [ 00:10:03.292 "e6ebbb9f-d45a-4343-b645-a9671afe31f6" 00:10:03.292 ], 00:10:03.292 "product_name": "Malloc disk", 00:10:03.292 "block_size": 512, 00:10:03.292 "num_blocks": 65536, 00:10:03.292 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6", 00:10:03.292 "assigned_rate_limits": { 00:10:03.292 "rw_ios_per_sec": 0, 00:10:03.292 "rw_mbytes_per_sec": 0, 00:10:03.292 "r_mbytes_per_sec": 0, 00:10:03.292 "w_mbytes_per_sec": 0 00:10:03.292 }, 00:10:03.292 "claimed": false, 00:10:03.292 "zoned": false, 00:10:03.292 "supported_io_types": { 00:10:03.292 "read": true, 00:10:03.292 "write": true, 00:10:03.292 "unmap": true, 00:10:03.292 "flush": true, 00:10:03.292 "reset": true, 00:10:03.292 "nvme_admin": false, 00:10:03.292 "nvme_io": false, 00:10:03.292 "nvme_io_md": false, 00:10:03.292 "write_zeroes": true, 
00:10:03.292 "zcopy": true, 00:10:03.292 "get_zone_info": false, 00:10:03.292 "zone_management": false, 00:10:03.292 "zone_append": false, 00:10:03.292 "compare": false, 00:10:03.292 "compare_and_write": false, 00:10:03.292 "abort": true, 00:10:03.292 "seek_hole": false, 00:10:03.292 "seek_data": false, 00:10:03.292 "copy": true, 00:10:03.292 "nvme_iov_md": false 00:10:03.292 }, 00:10:03.292 "memory_domains": [ 00:10:03.292 { 00:10:03.292 "dma_device_id": "system", 00:10:03.292 "dma_device_type": 1 00:10:03.292 }, 00:10:03.292 { 00:10:03.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.292 "dma_device_type": 2 00:10:03.292 } 00:10:03.292 ], 00:10:03.292 "driver_specific": {} 00:10:03.292 } 00:10:03.292 ] 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.292 [2024-11-05 03:21:16.650576] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.292 [2024-11-05 03:21:16.650792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.292 [2024-11-05 03:21:16.650926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.292 [2024-11-05 03:21:16.653332] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.292 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.293 "name": "Existed_Raid", 00:10:03.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.293 "strip_size_kb": 64, 00:10:03.293 "state": "configuring", 00:10:03.293 "raid_level": "concat", 00:10:03.293 "superblock": false, 00:10:03.293 "num_base_bdevs": 3, 00:10:03.293 "num_base_bdevs_discovered": 2, 00:10:03.293 "num_base_bdevs_operational": 3, 00:10:03.293 "base_bdevs_list": [ 00:10:03.293 { 00:10:03.293 "name": "BaseBdev1", 00:10:03.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.293 "is_configured": false, 00:10:03.293 "data_offset": 0, 00:10:03.293 "data_size": 0 00:10:03.293 }, 00:10:03.293 { 00:10:03.293 "name": "BaseBdev2", 00:10:03.293 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3", 00:10:03.293 "is_configured": true, 00:10:03.293 "data_offset": 0, 00:10:03.293 "data_size": 65536 00:10:03.293 }, 00:10:03.293 { 00:10:03.293 "name": "BaseBdev3", 00:10:03.293 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6", 00:10:03.293 "is_configured": true, 00:10:03.293 "data_offset": 0, 00:10:03.293 "data_size": 65536 00:10:03.293 } 00:10:03.293 ] 00:10:03.293 }' 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.293 03:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.552 [2024-11-05 03:21:17.182769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.552 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.811 "name": "Existed_Raid", 00:10:03.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.811 "strip_size_kb": 64, 00:10:03.811 "state": "configuring", 00:10:03.811 "raid_level": "concat", 00:10:03.811 "superblock": false, 
00:10:03.811 "num_base_bdevs": 3, 00:10:03.811 "num_base_bdevs_discovered": 1, 00:10:03.811 "num_base_bdevs_operational": 3, 00:10:03.811 "base_bdevs_list": [ 00:10:03.811 { 00:10:03.811 "name": "BaseBdev1", 00:10:03.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.811 "is_configured": false, 00:10:03.811 "data_offset": 0, 00:10:03.811 "data_size": 0 00:10:03.811 }, 00:10:03.811 { 00:10:03.811 "name": null, 00:10:03.811 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3", 00:10:03.811 "is_configured": false, 00:10:03.811 "data_offset": 0, 00:10:03.811 "data_size": 65536 00:10:03.811 }, 00:10:03.811 { 00:10:03.811 "name": "BaseBdev3", 00:10:03.811 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6", 00:10:03.811 "is_configured": true, 00:10:03.811 "data_offset": 0, 00:10:03.811 "data_size": 65536 00:10:03.811 } 00:10:03.811 ] 00:10:03.811 }' 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.811 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.379 
03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.379 [2024-11-05 03:21:17.802823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.379 BaseBdev1 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.379 [ 00:10:04.379 { 00:10:04.379 "name": "BaseBdev1", 00:10:04.379 "aliases": [ 00:10:04.379 "301cc61b-51fd-4ee1-be8b-d3b66637683a" 00:10:04.379 ], 00:10:04.379 "product_name": 
"Malloc disk",
00:10:04.379 "block_size": 512,
00:10:04.379 "num_blocks": 65536,
00:10:04.379 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:04.379 "assigned_rate_limits": {
00:10:04.379 "rw_ios_per_sec": 0,
00:10:04.379 "rw_mbytes_per_sec": 0,
00:10:04.379 "r_mbytes_per_sec": 0,
00:10:04.379 "w_mbytes_per_sec": 0
00:10:04.379 },
00:10:04.379 "claimed": true,
00:10:04.379 "claim_type": "exclusive_write",
00:10:04.379 "zoned": false,
00:10:04.379 "supported_io_types": {
00:10:04.379 "read": true,
00:10:04.379 "write": true,
00:10:04.379 "unmap": true,
00:10:04.379 "flush": true,
00:10:04.379 "reset": true,
00:10:04.379 "nvme_admin": false,
00:10:04.379 "nvme_io": false,
00:10:04.379 "nvme_io_md": false,
00:10:04.379 "write_zeroes": true,
00:10:04.379 "zcopy": true,
00:10:04.379 "get_zone_info": false,
00:10:04.379 "zone_management": false,
00:10:04.379 "zone_append": false,
00:10:04.379 "compare": false,
00:10:04.379 "compare_and_write": false,
00:10:04.379 "abort": true,
00:10:04.379 "seek_hole": false,
00:10:04.379 "seek_data": false,
00:10:04.379 "copy": true,
00:10:04.379 "nvme_iov_md": false
00:10:04.379 },
00:10:04.379 "memory_domains": [
00:10:04.379 {
00:10:04.379 "dma_device_id": "system",
00:10:04.379 "dma_device_type": 1
00:10:04.379 },
00:10:04.379 {
00:10:04.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:04.379 "dma_device_type": 2
00:10:04.379 }
00:10:04.379 ],
00:10:04.379 "driver_specific": {}
00:10:04.379 }
00:10:04.379 ]
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:04.379 "name": "Existed_Raid",
00:10:04.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:04.379 "strip_size_kb": 64,
00:10:04.379 "state": "configuring",
00:10:04.379 "raid_level": "concat",
00:10:04.379 "superblock": false,
00:10:04.379 "num_base_bdevs": 3,
00:10:04.379 "num_base_bdevs_discovered": 2,
00:10:04.379 "num_base_bdevs_operational": 3,
00:10:04.379 "base_bdevs_list": [
00:10:04.379 {
00:10:04.379 "name": "BaseBdev1",
00:10:04.379 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:04.379 "is_configured": true,
00:10:04.379 "data_offset": 0,
00:10:04.379 "data_size": 65536
00:10:04.379 },
00:10:04.379 {
00:10:04.379 "name": null,
00:10:04.379 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:04.379 "is_configured": false,
00:10:04.379 "data_offset": 0,
00:10:04.379 "data_size": 65536
00:10:04.379 },
00:10:04.379 {
00:10:04.379 "name": "BaseBdev3",
00:10:04.379 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:04.379 "is_configured": true,
00:10:04.379 "data_offset": 0,
00:10:04.379 "data_size": 65536
00:10:04.379 }
00:10:04.379 ]
00:10:04.379 }'
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:04.379 03:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.947 [2024-11-05 03:21:18.439056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:04.947 "name": "Existed_Raid",
00:10:04.947 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:04.947 "strip_size_kb": 64,
00:10:04.947 "state": "configuring",
00:10:04.947 "raid_level": "concat",
00:10:04.947 "superblock": false,
00:10:04.947 "num_base_bdevs": 3,
00:10:04.947 "num_base_bdevs_discovered": 1,
00:10:04.947 "num_base_bdevs_operational": 3,
00:10:04.947 "base_bdevs_list": [
00:10:04.947 {
00:10:04.947 "name": "BaseBdev1",
00:10:04.947 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:04.947 "is_configured": true,
00:10:04.947 "data_offset": 0,
00:10:04.947 "data_size": 65536
00:10:04.947 },
00:10:04.947 {
00:10:04.947 "name": null,
00:10:04.947 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:04.947 "is_configured": false,
00:10:04.947 "data_offset": 0,
00:10:04.947 "data_size": 65536
00:10:04.947 },
00:10:04.947 {
00:10:04.947 "name": null,
00:10:04.947 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:04.947 "is_configured": false,
00:10:04.947 "data_offset": 0,
00:10:04.947 "data_size": 65536
00:10:04.947 }
00:10:04.947 ]
00:10:04.947 }'
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:04.947 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.515 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.516 [2024-11-05 03:21:18.995207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:05.516 03:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:05.516 "name": "Existed_Raid",
00:10:05.516 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:05.516 "strip_size_kb": 64,
00:10:05.516 "state": "configuring",
00:10:05.516 "raid_level": "concat",
00:10:05.516 "superblock": false,
00:10:05.516 "num_base_bdevs": 3,
00:10:05.516 "num_base_bdevs_discovered": 2,
00:10:05.516 "num_base_bdevs_operational": 3,
00:10:05.516 "base_bdevs_list": [
00:10:05.516 {
00:10:05.516 "name": "BaseBdev1",
00:10:05.516 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:05.516 "is_configured": true,
00:10:05.516 "data_offset": 0,
00:10:05.516 "data_size": 65536
00:10:05.516 },
00:10:05.516 {
00:10:05.516 "name": null,
00:10:05.516 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:05.516 "is_configured": false,
00:10:05.516 "data_offset": 0,
00:10:05.516 "data_size": 65536
00:10:05.516 },
00:10:05.516 {
00:10:05.516 "name": "BaseBdev3",
00:10:05.516 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:05.516 "is_configured": true,
00:10:05.516 "data_offset": 0,
00:10:05.516 "data_size": 65536
00:10:05.516 }
00:10:05.516 ]
00:10:05.516 }'
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:05.516 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.091 [2024-11-05 03:21:19.579409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.091 "name": "Existed_Raid",
00:10:06.091 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.091 "strip_size_kb": 64,
00:10:06.091 "state": "configuring",
00:10:06.091 "raid_level": "concat",
00:10:06.091 "superblock": false,
00:10:06.091 "num_base_bdevs": 3,
00:10:06.091 "num_base_bdevs_discovered": 1,
00:10:06.091 "num_base_bdevs_operational": 3,
00:10:06.091 "base_bdevs_list": [
00:10:06.091 {
00:10:06.091 "name": null,
00:10:06.091 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:06.091 "is_configured": false,
00:10:06.091 "data_offset": 0,
00:10:06.091 "data_size": 65536
00:10:06.091 },
00:10:06.091 {
00:10:06.091 "name": null,
00:10:06.091 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:06.091 "is_configured": false,
00:10:06.091 "data_offset": 0,
00:10:06.091 "data_size": 65536
00:10:06.091 },
00:10:06.091 {
00:10:06.091 "name": "BaseBdev3",
00:10:06.091 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:06.091 "is_configured": true,
00:10:06.091 "data_offset": 0,
00:10:06.091 "data_size": 65536
00:10:06.091 }
00:10:06.091 ]
00:10:06.091 }'
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.091 03:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.660 [2024-11-05 03:21:20.246697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.660 03:21:20
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.660 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.919 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.919 "name": "Existed_Raid",
00:10:06.919 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.919 "strip_size_kb": 64,
00:10:06.919 "state": "configuring",
00:10:06.919 "raid_level": "concat",
00:10:06.919 "superblock": false,
00:10:06.919 "num_base_bdevs": 3,
00:10:06.919 "num_base_bdevs_discovered": 2,
00:10:06.919 "num_base_bdevs_operational": 3,
00:10:06.919 "base_bdevs_list": [
00:10:06.919 {
00:10:06.919 "name": null,
00:10:06.919 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:06.919 "is_configured": false,
00:10:06.919 "data_offset": 0,
00:10:06.919 "data_size": 65536
00:10:06.919 },
00:10:06.919 {
00:10:06.919 "name": "BaseBdev2",
00:10:06.919 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:06.919 "is_configured": true,
00:10:06.919 "data_offset": 0,
00:10:06.919 "data_size": 65536
00:10:06.919 },
00:10:06.919 {
00:10:06.919 "name": "BaseBdev3",
00:10:06.919 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:06.919 "is_configured": true,
00:10:06.919 "data_offset": 0,
00:10:06.919 "data_size": 65536
00:10:06.919 }
00:10:06.919 ]
00:10:06.919 }'
00:10:06.919 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.919 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.179 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:07.179 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.179 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.179 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.179 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 301cc61b-51fd-4ee1-be8b-d3b66637683a
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.438 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.438 [2024-11-05 03:21:20.938320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:07.438 [2024-11-05 03:21:20.938365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:07.438 [2024-11-05 03:21:20.938379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:10:07.438 [2024-11-05 03:21:20.938757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:07.438 [2024-11-05 03:21:20.938976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:07.438 [2024-11-05 03:21:20.939006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:10:07.438 [2024-11-05 03:21:20.939293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:07.438 NewBaseBdev
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.439 [
00:10:07.439 {
00:10:07.439 "name": "NewBaseBdev",
00:10:07.439 "aliases": [
00:10:07.439 "301cc61b-51fd-4ee1-be8b-d3b66637683a"
00:10:07.439 ],
00:10:07.439 "product_name": "Malloc disk",
00:10:07.439 "block_size": 512,
00:10:07.439 "num_blocks": 65536,
00:10:07.439 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:07.439 "assigned_rate_limits": {
00:10:07.439 "rw_ios_per_sec": 0,
00:10:07.439 "rw_mbytes_per_sec": 0,
00:10:07.439 "r_mbytes_per_sec": 0,
00:10:07.439 "w_mbytes_per_sec": 0
00:10:07.439 },
00:10:07.439 "claimed": true,
00:10:07.439 "claim_type": "exclusive_write",
00:10:07.439 "zoned": false,
00:10:07.439 "supported_io_types": {
00:10:07.439 "read": true,
00:10:07.439 "write": true,
00:10:07.439 "unmap": true,
00:10:07.439 "flush": true,
00:10:07.439 "reset": true,
00:10:07.439 "nvme_admin": false,
00:10:07.439 "nvme_io": false,
00:10:07.439 "nvme_io_md": false,
00:10:07.439 "write_zeroes": true,
00:10:07.439 "zcopy": true,
00:10:07.439 "get_zone_info": false,
00:10:07.439 "zone_management": false,
00:10:07.439 "zone_append": false,
00:10:07.439 "compare": false,
00:10:07.439 "compare_and_write": false,
00:10:07.439 "abort": true,
00:10:07.439 "seek_hole": false,
00:10:07.439 "seek_data": false,
00:10:07.439 "copy": true,
00:10:07.439 "nvme_iov_md": false
00:10:07.439 },
00:10:07.439 "memory_domains": [
00:10:07.439 {
00:10:07.439 "dma_device_id": "system",
00:10:07.439 "dma_device_type": 1
00:10:07.439 },
00:10:07.439 {
00:10:07.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:07.439 "dma_device_type": 2
00:10:07.439 }
00:10:07.439 ],
00:10:07.439 "driver_specific": {}
00:10:07.439 }
00:10:07.439 ]
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113
-- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.439 03:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.439 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.439 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.439 "name": "Existed_Raid",
00:10:07.439 "uuid": "d9e6c267-8b1e-4f7f-a33a-bd2c382a4c24",
00:10:07.439 "strip_size_kb": 64,
00:10:07.439 "state": "online",
00:10:07.439 "raid_level": "concat",
00:10:07.439 "superblock": false,
00:10:07.439 "num_base_bdevs": 3,
00:10:07.439 "num_base_bdevs_discovered": 3,
00:10:07.439 "num_base_bdevs_operational": 3,
00:10:07.439 "base_bdevs_list": [
00:10:07.439 {
00:10:07.439 "name": "NewBaseBdev",
00:10:07.439 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:07.439 "is_configured": true,
00:10:07.439 "data_offset": 0,
00:10:07.439 "data_size": 65536
00:10:07.439 },
00:10:07.439 {
00:10:07.439 "name": "BaseBdev2",
00:10:07.439 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:07.439 "is_configured": true,
00:10:07.439 "data_offset": 0,
00:10:07.439 "data_size": 65536
00:10:07.439 },
00:10:07.439 {
00:10:07.439 "name": "BaseBdev3",
00:10:07.439 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:07.439 "is_configured": true,
00:10:07.439 "data_offset": 0,
00:10:07.439 "data_size": 65536
00:10:07.439 }
00:10:07.439 ]
00:10:07.439 }'
00:10:07.439 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.439 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:08.006 [2024-11-05 03:21:21.502919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:08.006 "name": "Existed_Raid",
00:10:08.006 "aliases": [
00:10:08.006 "d9e6c267-8b1e-4f7f-a33a-bd2c382a4c24"
00:10:08.006 ],
00:10:08.006 "product_name": "Raid Volume",
00:10:08.006 "block_size": 512,
00:10:08.006 "num_blocks": 196608,
00:10:08.006 "uuid": "d9e6c267-8b1e-4f7f-a33a-bd2c382a4c24",
00:10:08.006 "assigned_rate_limits": {
00:10:08.006 "rw_ios_per_sec": 0,
00:10:08.006 "rw_mbytes_per_sec": 0,
00:10:08.006 "r_mbytes_per_sec": 0,
00:10:08.006 "w_mbytes_per_sec": 0
00:10:08.006 },
00:10:08.006 "claimed": false,
00:10:08.006 "zoned": false,
00:10:08.006 "supported_io_types": {
00:10:08.006 "read": true,
00:10:08.006 "write": true,
00:10:08.006 "unmap": true,
00:10:08.006 "flush": true,
00:10:08.006 "reset": true,
00:10:08.006 "nvme_admin": false,
00:10:08.006 "nvme_io": false,
00:10:08.006 "nvme_io_md": false,
00:10:08.006 "write_zeroes": true,
00:10:08.006 "zcopy": false,
00:10:08.006 "get_zone_info": false,
00:10:08.006 "zone_management": false,
00:10:08.006 "zone_append": false,
00:10:08.006 "compare": false,
00:10:08.006 "compare_and_write": false,
00:10:08.006 "abort": false,
00:10:08.006 "seek_hole": false,
00:10:08.006 "seek_data": false,
00:10:08.006 "copy": false,
00:10:08.006 "nvme_iov_md": false
00:10:08.006 },
00:10:08.006 "memory_domains": [
00:10:08.006 {
00:10:08.006 "dma_device_id": "system",
00:10:08.006 "dma_device_type": 1
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.006 "dma_device_type": 2
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "dma_device_id": "system",
00:10:08.006 "dma_device_type": 1
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.006 "dma_device_type": 2
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "dma_device_id": "system",
00:10:08.006 "dma_device_type": 1
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.006 "dma_device_type": 2
00:10:08.006 }
00:10:08.006 ],
00:10:08.006 "driver_specific": {
00:10:08.006 "raid": {
00:10:08.006 "uuid": "d9e6c267-8b1e-4f7f-a33a-bd2c382a4c24",
00:10:08.006 "strip_size_kb": 64,
00:10:08.006 "state": "online",
00:10:08.006 "raid_level": "concat",
00:10:08.006 "superblock": false,
00:10:08.006 "num_base_bdevs": 3,
00:10:08.006 "num_base_bdevs_discovered": 3,
00:10:08.006 "num_base_bdevs_operational": 3,
00:10:08.006 "base_bdevs_list": [
00:10:08.006 {
00:10:08.006 "name": "NewBaseBdev",
00:10:08.006 "uuid": "301cc61b-51fd-4ee1-be8b-d3b66637683a",
00:10:08.006 "is_configured": true,
00:10:08.006 "data_offset": 0,
00:10:08.006 "data_size": 65536
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "name": "BaseBdev2",
00:10:08.006 "uuid": "65cf962e-7d8d-4f2f-a14a-f6e07e0306f3",
00:10:08.006 "is_configured": true,
00:10:08.006 "data_offset": 0,
00:10:08.006 "data_size": 65536
00:10:08.006 },
00:10:08.006 {
00:10:08.006 "name": "BaseBdev3",
00:10:08.006 "uuid": "e6ebbb9f-d45a-4343-b645-a9671afe31f6",
00:10:08.006 "is_configured": true,
00:10:08.006 "data_offset": 0,
00:10:08.006 "data_size": 65536
00:10:08.006 }
00:10:08.006 ]
00:10:08.006 }
00:10:08.006 }
00:10:08.006 }'
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:08.006 BaseBdev2
00:10:08.006 BaseBdev3'
00:10:08.006 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.266 03:21:21 bdev_raid.raid_state_function_test
-- common/autotest_common.sh@10 -- # set +x 00:10:08.266 [2024-11-05 03:21:21.818623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.266 [2024-11-05 03:21:21.818702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.266 [2024-11-05 03:21:21.818814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.266 [2024-11-05 03:21:21.818873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.266 [2024-11-05 03:21:21.818889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65397 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65397 ']' 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65397 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65397 00:10:08.266 killing process with pid 65397 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65397' 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65397 00:10:08.266 [2024-11-05 03:21:21.857449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.266 03:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65397 00:10:08.524 [2024-11-05 03:21:22.103443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.461 00:10:09.461 real 0m11.781s 00:10:09.461 user 0m19.777s 00:10:09.461 sys 0m1.572s 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.461 ************************************ 00:10:09.461 END TEST raid_state_function_test 00:10:09.461 ************************************ 00:10:09.461 03:21:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:09.461 03:21:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:09.461 03:21:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.461 03:21:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.461 ************************************ 00:10:09.461 START TEST raid_state_function_test_sb 00:10:09.461 ************************************ 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:09.461 Process raid pid: 66035 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66035 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66035' 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66035 00:10:09.461 03:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.720 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66035 ']' 00:10:09.720 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.720 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.720 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:09.720 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.720 03:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.720 [2024-11-05 03:21:23.204742] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:10:09.720 [2024-11-05 03:21:23.205588] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.979 [2024-11-05 03:21:23.397822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.979 [2024-11-05 03:21:23.516242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.238 [2024-11-05 03:21:23.709964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.238 [2024-11-05 03:21:23.710001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.806 [2024-11-05 03:21:24.180665] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.806 [2024-11-05 03:21:24.180784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.806 [2024-11-05 
03:21:24.180800] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.806 [2024-11-05 03:21:24.180815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.806 [2024-11-05 03:21:24.180824] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.806 [2024-11-05 03:21:24.180837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.806 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.806 "name": "Existed_Raid", 00:10:10.806 "uuid": "b7f5bd7d-9357-476c-a4cb-b3078d8aa6cb", 00:10:10.806 "strip_size_kb": 64, 00:10:10.806 "state": "configuring", 00:10:10.806 "raid_level": "concat", 00:10:10.806 "superblock": true, 00:10:10.806 "num_base_bdevs": 3, 00:10:10.806 "num_base_bdevs_discovered": 0, 00:10:10.806 "num_base_bdevs_operational": 3, 00:10:10.806 "base_bdevs_list": [ 00:10:10.806 { 00:10:10.806 "name": "BaseBdev1", 00:10:10.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.806 "is_configured": false, 00:10:10.806 "data_offset": 0, 00:10:10.806 "data_size": 0 00:10:10.806 }, 00:10:10.806 { 00:10:10.806 "name": "BaseBdev2", 00:10:10.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.806 "is_configured": false, 00:10:10.806 "data_offset": 0, 00:10:10.806 "data_size": 0 00:10:10.807 }, 00:10:10.807 { 00:10:10.807 "name": "BaseBdev3", 00:10:10.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.807 "is_configured": false, 00:10:10.807 "data_offset": 0, 00:10:10.807 "data_size": 0 00:10:10.807 } 00:10:10.807 ] 00:10:10.807 }' 00:10:10.807 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.807 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 [2024-11-05 03:21:24.712806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.376 [2024-11-05 03:21:24.712845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 [2024-11-05 03:21:24.724804] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.376 [2024-11-05 03:21:24.725039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.376 [2024-11-05 03:21:24.725156] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.376 [2024-11-05 03:21:24.725282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.376 [2024-11-05 03:21:24.725446] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.376 [2024-11-05 03:21:24.725489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.376 
03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 [2024-11-05 03:21:24.774460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.376 BaseBdev1 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 [ 00:10:11.376 { 
00:10:11.376 "name": "BaseBdev1", 00:10:11.376 "aliases": [ 00:10:11.376 "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e" 00:10:11.376 ], 00:10:11.376 "product_name": "Malloc disk", 00:10:11.376 "block_size": 512, 00:10:11.376 "num_blocks": 65536, 00:10:11.376 "uuid": "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e", 00:10:11.376 "assigned_rate_limits": { 00:10:11.376 "rw_ios_per_sec": 0, 00:10:11.376 "rw_mbytes_per_sec": 0, 00:10:11.376 "r_mbytes_per_sec": 0, 00:10:11.376 "w_mbytes_per_sec": 0 00:10:11.376 }, 00:10:11.376 "claimed": true, 00:10:11.376 "claim_type": "exclusive_write", 00:10:11.376 "zoned": false, 00:10:11.376 "supported_io_types": { 00:10:11.376 "read": true, 00:10:11.376 "write": true, 00:10:11.376 "unmap": true, 00:10:11.376 "flush": true, 00:10:11.376 "reset": true, 00:10:11.376 "nvme_admin": false, 00:10:11.376 "nvme_io": false, 00:10:11.376 "nvme_io_md": false, 00:10:11.376 "write_zeroes": true, 00:10:11.376 "zcopy": true, 00:10:11.376 "get_zone_info": false, 00:10:11.376 "zone_management": false, 00:10:11.376 "zone_append": false, 00:10:11.376 "compare": false, 00:10:11.376 "compare_and_write": false, 00:10:11.376 "abort": true, 00:10:11.376 "seek_hole": false, 00:10:11.376 "seek_data": false, 00:10:11.376 "copy": true, 00:10:11.376 "nvme_iov_md": false 00:10:11.376 }, 00:10:11.376 "memory_domains": [ 00:10:11.376 { 00:10:11.376 "dma_device_id": "system", 00:10:11.376 "dma_device_type": 1 00:10:11.376 }, 00:10:11.376 { 00:10:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.376 "dma_device_type": 2 00:10:11.376 } 00:10:11.376 ], 00:10:11.376 "driver_specific": {} 00:10:11.376 } 00:10:11.376 ] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.376 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.376 "name": "Existed_Raid", 00:10:11.376 "uuid": "9bd0d334-c0a5-4647-b5b3-d44faef9c384", 00:10:11.376 "strip_size_kb": 64, 00:10:11.376 "state": "configuring", 00:10:11.376 "raid_level": "concat", 00:10:11.376 "superblock": true, 00:10:11.376 
"num_base_bdevs": 3, 00:10:11.376 "num_base_bdevs_discovered": 1, 00:10:11.376 "num_base_bdevs_operational": 3, 00:10:11.376 "base_bdevs_list": [ 00:10:11.376 { 00:10:11.376 "name": "BaseBdev1", 00:10:11.376 "uuid": "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e", 00:10:11.376 "is_configured": true, 00:10:11.376 "data_offset": 2048, 00:10:11.376 "data_size": 63488 00:10:11.376 }, 00:10:11.376 { 00:10:11.376 "name": "BaseBdev2", 00:10:11.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.376 "is_configured": false, 00:10:11.376 "data_offset": 0, 00:10:11.376 "data_size": 0 00:10:11.376 }, 00:10:11.376 { 00:10:11.376 "name": "BaseBdev3", 00:10:11.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.376 "is_configured": false, 00:10:11.376 "data_offset": 0, 00:10:11.377 "data_size": 0 00:10:11.377 } 00:10:11.377 ] 00:10:11.377 }' 00:10:11.377 03:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.377 03:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.945 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.945 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.945 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.945 [2024-11-05 03:21:25.378738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.945 [2024-11-05 03:21:25.378985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:11.945 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.945 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.946 
03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.946 [2024-11-05 03:21:25.390829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.946 [2024-11-05 03:21:25.393522] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.946 [2024-11-05 03:21:25.393592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.946 [2024-11-05 03:21:25.393607] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.946 [2024-11-05 03:21:25.393621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.946 "name": "Existed_Raid", 00:10:11.946 "uuid": "fa4220c1-f899-41b3-bf66-0b584d3d44f7", 00:10:11.946 "strip_size_kb": 64, 00:10:11.946 "state": "configuring", 00:10:11.946 "raid_level": "concat", 00:10:11.946 "superblock": true, 00:10:11.946 "num_base_bdevs": 3, 00:10:11.946 "num_base_bdevs_discovered": 1, 00:10:11.946 "num_base_bdevs_operational": 3, 00:10:11.946 "base_bdevs_list": [ 00:10:11.946 { 00:10:11.946 "name": "BaseBdev1", 00:10:11.946 "uuid": "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e", 00:10:11.946 "is_configured": true, 00:10:11.946 "data_offset": 2048, 00:10:11.946 "data_size": 63488 00:10:11.946 }, 00:10:11.946 { 00:10:11.946 "name": "BaseBdev2", 00:10:11.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.946 "is_configured": false, 00:10:11.946 "data_offset": 0, 00:10:11.946 "data_size": 0 00:10:11.946 }, 00:10:11.946 { 00:10:11.946 "name": "BaseBdev3", 00:10:11.946 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:11.946 "is_configured": false, 00:10:11.946 "data_offset": 0, 00:10:11.946 "data_size": 0 00:10:11.946 } 00:10:11.946 ] 00:10:11.946 }' 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.946 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 [2024-11-05 03:21:25.988214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.515 BaseBdev2 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.515 03:21:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 [ 00:10:12.515 { 00:10:12.515 "name": "BaseBdev2", 00:10:12.515 "aliases": [ 00:10:12.515 "6dc4b757-7537-4afa-afa8-6a5666fba318" 00:10:12.515 ], 00:10:12.515 "product_name": "Malloc disk", 00:10:12.515 "block_size": 512, 00:10:12.515 "num_blocks": 65536, 00:10:12.515 "uuid": "6dc4b757-7537-4afa-afa8-6a5666fba318", 00:10:12.515 "assigned_rate_limits": { 00:10:12.515 "rw_ios_per_sec": 0, 00:10:12.515 "rw_mbytes_per_sec": 0, 00:10:12.515 "r_mbytes_per_sec": 0, 00:10:12.515 "w_mbytes_per_sec": 0 00:10:12.515 }, 00:10:12.515 "claimed": true, 00:10:12.515 "claim_type": "exclusive_write", 00:10:12.515 "zoned": false, 00:10:12.515 "supported_io_types": { 00:10:12.515 "read": true, 00:10:12.515 "write": true, 00:10:12.515 "unmap": true, 00:10:12.515 "flush": true, 00:10:12.515 "reset": true, 00:10:12.515 "nvme_admin": false, 00:10:12.515 "nvme_io": false, 00:10:12.515 "nvme_io_md": false, 00:10:12.515 "write_zeroes": true, 00:10:12.515 "zcopy": true, 00:10:12.515 "get_zone_info": false, 00:10:12.515 "zone_management": false, 00:10:12.515 "zone_append": false, 00:10:12.515 "compare": false, 00:10:12.515 "compare_and_write": false, 00:10:12.515 "abort": true, 00:10:12.515 "seek_hole": false, 00:10:12.515 "seek_data": false, 00:10:12.515 "copy": true, 00:10:12.515 "nvme_iov_md": false 00:10:12.515 }, 00:10:12.515 "memory_domains": [ 00:10:12.515 { 00:10:12.515 "dma_device_id": "system", 00:10:12.515 "dma_device_type": 1 00:10:12.515 }, 00:10:12.515 { 00:10:12.515 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.515 "dma_device_type": 2 00:10:12.515 } 00:10:12.515 ], 00:10:12.515 "driver_specific": {} 00:10:12.515 } 00:10:12.515 ] 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.515 "name": "Existed_Raid", 00:10:12.515 "uuid": "fa4220c1-f899-41b3-bf66-0b584d3d44f7", 00:10:12.515 "strip_size_kb": 64, 00:10:12.515 "state": "configuring", 00:10:12.515 "raid_level": "concat", 00:10:12.515 "superblock": true, 00:10:12.515 "num_base_bdevs": 3, 00:10:12.515 "num_base_bdevs_discovered": 2, 00:10:12.515 "num_base_bdevs_operational": 3, 00:10:12.515 "base_bdevs_list": [ 00:10:12.515 { 00:10:12.515 "name": "BaseBdev1", 00:10:12.515 "uuid": "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e", 00:10:12.515 "is_configured": true, 00:10:12.515 "data_offset": 2048, 00:10:12.515 "data_size": 63488 00:10:12.515 }, 00:10:12.515 { 00:10:12.515 "name": "BaseBdev2", 00:10:12.515 "uuid": "6dc4b757-7537-4afa-afa8-6a5666fba318", 00:10:12.515 "is_configured": true, 00:10:12.515 "data_offset": 2048, 00:10:12.515 "data_size": 63488 00:10:12.515 }, 00:10:12.515 { 00:10:12.515 "name": "BaseBdev3", 00:10:12.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.515 "is_configured": false, 00:10:12.515 "data_offset": 0, 00:10:12.515 "data_size": 0 00:10:12.515 } 00:10:12.515 ] 00:10:12.515 }' 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.515 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.084 03:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.084 [2024-11-05 03:21:26.628748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.084 [2024-11-05 03:21:26.629423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.084 [2024-11-05 03:21:26.629460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:13.084 BaseBdev3 00:10:13.084 [2024-11-05 03:21:26.629878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:13.084 [2024-11-05 03:21:26.630112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.084 [2024-11-05 03:21:26.630142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.084 [2024-11-05 03:21:26.630342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.084 [ 00:10:13.084 { 00:10:13.084 "name": "BaseBdev3", 00:10:13.084 "aliases": [ 00:10:13.084 "449745d3-16da-4dab-b48f-b6688a581dee" 00:10:13.084 ], 00:10:13.084 "product_name": "Malloc disk", 00:10:13.084 "block_size": 512, 00:10:13.084 "num_blocks": 65536, 00:10:13.084 "uuid": "449745d3-16da-4dab-b48f-b6688a581dee", 00:10:13.084 "assigned_rate_limits": { 00:10:13.084 "rw_ios_per_sec": 0, 00:10:13.084 "rw_mbytes_per_sec": 0, 00:10:13.084 "r_mbytes_per_sec": 0, 00:10:13.084 "w_mbytes_per_sec": 0 00:10:13.084 }, 00:10:13.084 "claimed": true, 00:10:13.084 "claim_type": "exclusive_write", 00:10:13.084 "zoned": false, 00:10:13.084 "supported_io_types": { 00:10:13.084 "read": true, 00:10:13.084 "write": true, 00:10:13.084 "unmap": true, 00:10:13.084 "flush": true, 00:10:13.084 "reset": true, 00:10:13.084 "nvme_admin": false, 00:10:13.084 "nvme_io": false, 00:10:13.084 "nvme_io_md": false, 00:10:13.084 "write_zeroes": true, 00:10:13.084 "zcopy": true, 00:10:13.084 "get_zone_info": false, 00:10:13.084 "zone_management": false, 00:10:13.084 "zone_append": false, 00:10:13.084 "compare": false, 00:10:13.084 "compare_and_write": false, 00:10:13.084 "abort": true, 00:10:13.084 "seek_hole": false, 00:10:13.084 "seek_data": false, 
00:10:13.084 "copy": true, 00:10:13.084 "nvme_iov_md": false 00:10:13.084 }, 00:10:13.084 "memory_domains": [ 00:10:13.084 { 00:10:13.084 "dma_device_id": "system", 00:10:13.084 "dma_device_type": 1 00:10:13.084 }, 00:10:13.084 { 00:10:13.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.084 "dma_device_type": 2 00:10:13.084 } 00:10:13.084 ], 00:10:13.084 "driver_specific": {} 00:10:13.084 } 00:10:13.084 ] 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.084 03:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.343 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.343 "name": "Existed_Raid", 00:10:13.343 "uuid": "fa4220c1-f899-41b3-bf66-0b584d3d44f7", 00:10:13.343 "strip_size_kb": 64, 00:10:13.343 "state": "online", 00:10:13.343 "raid_level": "concat", 00:10:13.343 "superblock": true, 00:10:13.343 "num_base_bdevs": 3, 00:10:13.343 "num_base_bdevs_discovered": 3, 00:10:13.343 "num_base_bdevs_operational": 3, 00:10:13.344 "base_bdevs_list": [ 00:10:13.344 { 00:10:13.344 "name": "BaseBdev1", 00:10:13.344 "uuid": "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e", 00:10:13.344 "is_configured": true, 00:10:13.344 "data_offset": 2048, 00:10:13.344 "data_size": 63488 00:10:13.344 }, 00:10:13.344 { 00:10:13.344 "name": "BaseBdev2", 00:10:13.344 "uuid": "6dc4b757-7537-4afa-afa8-6a5666fba318", 00:10:13.344 "is_configured": true, 00:10:13.344 "data_offset": 2048, 00:10:13.344 "data_size": 63488 00:10:13.344 }, 00:10:13.344 { 00:10:13.344 "name": "BaseBdev3", 00:10:13.344 "uuid": "449745d3-16da-4dab-b48f-b6688a581dee", 00:10:13.344 "is_configured": true, 00:10:13.344 "data_offset": 2048, 00:10:13.344 "data_size": 63488 00:10:13.344 } 00:10:13.344 ] 00:10:13.344 }' 00:10:13.344 03:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.344 03:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.603 [2024-11-05 03:21:27.221460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.603 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.862 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.862 "name": "Existed_Raid", 00:10:13.862 "aliases": [ 00:10:13.862 "fa4220c1-f899-41b3-bf66-0b584d3d44f7" 00:10:13.862 ], 00:10:13.862 "product_name": "Raid Volume", 00:10:13.862 "block_size": 512, 00:10:13.862 "num_blocks": 190464, 00:10:13.862 "uuid": "fa4220c1-f899-41b3-bf66-0b584d3d44f7", 00:10:13.862 "assigned_rate_limits": { 00:10:13.862 "rw_ios_per_sec": 0, 00:10:13.862 "rw_mbytes_per_sec": 0, 00:10:13.862 
"r_mbytes_per_sec": 0, 00:10:13.862 "w_mbytes_per_sec": 0 00:10:13.862 }, 00:10:13.862 "claimed": false, 00:10:13.862 "zoned": false, 00:10:13.862 "supported_io_types": { 00:10:13.862 "read": true, 00:10:13.862 "write": true, 00:10:13.862 "unmap": true, 00:10:13.862 "flush": true, 00:10:13.862 "reset": true, 00:10:13.862 "nvme_admin": false, 00:10:13.862 "nvme_io": false, 00:10:13.862 "nvme_io_md": false, 00:10:13.862 "write_zeroes": true, 00:10:13.862 "zcopy": false, 00:10:13.862 "get_zone_info": false, 00:10:13.862 "zone_management": false, 00:10:13.862 "zone_append": false, 00:10:13.862 "compare": false, 00:10:13.862 "compare_and_write": false, 00:10:13.862 "abort": false, 00:10:13.862 "seek_hole": false, 00:10:13.862 "seek_data": false, 00:10:13.862 "copy": false, 00:10:13.862 "nvme_iov_md": false 00:10:13.862 }, 00:10:13.862 "memory_domains": [ 00:10:13.862 { 00:10:13.862 "dma_device_id": "system", 00:10:13.862 "dma_device_type": 1 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.862 "dma_device_type": 2 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "dma_device_id": "system", 00:10:13.862 "dma_device_type": 1 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.862 "dma_device_type": 2 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "dma_device_id": "system", 00:10:13.862 "dma_device_type": 1 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.862 "dma_device_type": 2 00:10:13.862 } 00:10:13.862 ], 00:10:13.862 "driver_specific": { 00:10:13.862 "raid": { 00:10:13.862 "uuid": "fa4220c1-f899-41b3-bf66-0b584d3d44f7", 00:10:13.862 "strip_size_kb": 64, 00:10:13.862 "state": "online", 00:10:13.862 "raid_level": "concat", 00:10:13.862 "superblock": true, 00:10:13.862 "num_base_bdevs": 3, 00:10:13.862 "num_base_bdevs_discovered": 3, 00:10:13.862 "num_base_bdevs_operational": 3, 00:10:13.862 "base_bdevs_list": [ 00:10:13.862 { 00:10:13.862 
"name": "BaseBdev1", 00:10:13.862 "uuid": "0b27ac2e-fd3e-405a-bba5-a1b15f0c6f9e", 00:10:13.862 "is_configured": true, 00:10:13.862 "data_offset": 2048, 00:10:13.862 "data_size": 63488 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "name": "BaseBdev2", 00:10:13.862 "uuid": "6dc4b757-7537-4afa-afa8-6a5666fba318", 00:10:13.862 "is_configured": true, 00:10:13.862 "data_offset": 2048, 00:10:13.862 "data_size": 63488 00:10:13.862 }, 00:10:13.862 { 00:10:13.862 "name": "BaseBdev3", 00:10:13.862 "uuid": "449745d3-16da-4dab-b48f-b6688a581dee", 00:10:13.862 "is_configured": true, 00:10:13.862 "data_offset": 2048, 00:10:13.862 "data_size": 63488 00:10:13.862 } 00:10:13.862 ] 00:10:13.862 } 00:10:13.862 } 00:10:13.863 }' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:13.863 BaseBdev2 00:10:13.863 BaseBdev3' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.863 03:21:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.863 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.122 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.123 [2024-11-05 03:21:27.557162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.123 [2024-11-05 03:21:27.557390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.123 [2024-11-05 03:21:27.557491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.123 "name": "Existed_Raid", 00:10:14.123 "uuid": "fa4220c1-f899-41b3-bf66-0b584d3d44f7", 00:10:14.123 "strip_size_kb": 64, 00:10:14.123 "state": "offline", 00:10:14.123 "raid_level": "concat", 00:10:14.123 "superblock": true, 00:10:14.123 "num_base_bdevs": 3, 00:10:14.123 "num_base_bdevs_discovered": 2, 00:10:14.123 "num_base_bdevs_operational": 2, 00:10:14.123 "base_bdevs_list": [ 00:10:14.123 { 00:10:14.123 "name": null, 00:10:14.123 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:14.123 "is_configured": false, 00:10:14.123 "data_offset": 0, 00:10:14.123 "data_size": 63488 00:10:14.123 }, 00:10:14.123 { 00:10:14.123 "name": "BaseBdev2", 00:10:14.123 "uuid": "6dc4b757-7537-4afa-afa8-6a5666fba318", 00:10:14.123 "is_configured": true, 00:10:14.123 "data_offset": 2048, 00:10:14.123 "data_size": 63488 00:10:14.123 }, 00:10:14.123 { 00:10:14.123 "name": "BaseBdev3", 00:10:14.123 "uuid": "449745d3-16da-4dab-b48f-b6688a581dee", 00:10:14.123 "is_configured": true, 00:10:14.123 "data_offset": 2048, 00:10:14.123 "data_size": 63488 00:10:14.123 } 00:10:14.123 ] 00:10:14.123 }' 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.123 03:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.690 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.691 [2024-11-05 03:21:28.231577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.691 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 [2024-11-05 03:21:28.369995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.950 [2024-11-05 03:21:28.370053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 BaseBdev2 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 
03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 [ 00:10:14.950 { 00:10:14.950 "name": "BaseBdev2", 00:10:14.950 "aliases": [ 00:10:14.950 "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8" 00:10:14.950 ], 00:10:14.950 "product_name": "Malloc disk", 00:10:14.950 "block_size": 512, 00:10:14.950 "num_blocks": 65536, 00:10:14.950 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:14.950 "assigned_rate_limits": { 00:10:14.950 "rw_ios_per_sec": 0, 00:10:14.950 "rw_mbytes_per_sec": 0, 00:10:14.950 "r_mbytes_per_sec": 0, 00:10:14.950 "w_mbytes_per_sec": 0 
00:10:14.950 }, 00:10:14.950 "claimed": false, 00:10:14.950 "zoned": false, 00:10:14.950 "supported_io_types": { 00:10:14.950 "read": true, 00:10:14.950 "write": true, 00:10:14.950 "unmap": true, 00:10:14.950 "flush": true, 00:10:14.950 "reset": true, 00:10:14.950 "nvme_admin": false, 00:10:14.950 "nvme_io": false, 00:10:14.950 "nvme_io_md": false, 00:10:14.950 "write_zeroes": true, 00:10:14.950 "zcopy": true, 00:10:14.950 "get_zone_info": false, 00:10:14.950 "zone_management": false, 00:10:14.950 "zone_append": false, 00:10:14.950 "compare": false, 00:10:14.950 "compare_and_write": false, 00:10:14.950 "abort": true, 00:10:14.950 "seek_hole": false, 00:10:14.950 "seek_data": false, 00:10:14.950 "copy": true, 00:10:14.950 "nvme_iov_md": false 00:10:14.950 }, 00:10:14.950 "memory_domains": [ 00:10:14.950 { 00:10:14.950 "dma_device_id": "system", 00:10:14.950 "dma_device_type": 1 00:10:14.950 }, 00:10:14.950 { 00:10:14.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.950 "dma_device_type": 2 00:10:14.950 } 00:10:14.950 ], 00:10:14.950 "driver_specific": {} 00:10:14.950 } 00:10:14.950 ] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.950 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 BaseBdev3 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 [ 00:10:15.209 { 00:10:15.209 "name": "BaseBdev3", 00:10:15.209 "aliases": [ 00:10:15.209 "caed26c6-129a-42d8-acdf-f0c8bd88aaf4" 00:10:15.209 ], 00:10:15.209 "product_name": "Malloc disk", 00:10:15.209 "block_size": 512, 00:10:15.209 "num_blocks": 65536, 00:10:15.209 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:15.209 "assigned_rate_limits": { 00:10:15.209 "rw_ios_per_sec": 0, 00:10:15.209 "rw_mbytes_per_sec": 0, 
00:10:15.209 "r_mbytes_per_sec": 0, 00:10:15.209 "w_mbytes_per_sec": 0 00:10:15.209 }, 00:10:15.209 "claimed": false, 00:10:15.209 "zoned": false, 00:10:15.209 "supported_io_types": { 00:10:15.209 "read": true, 00:10:15.209 "write": true, 00:10:15.209 "unmap": true, 00:10:15.209 "flush": true, 00:10:15.209 "reset": true, 00:10:15.209 "nvme_admin": false, 00:10:15.209 "nvme_io": false, 00:10:15.209 "nvme_io_md": false, 00:10:15.209 "write_zeroes": true, 00:10:15.209 "zcopy": true, 00:10:15.209 "get_zone_info": false, 00:10:15.209 "zone_management": false, 00:10:15.209 "zone_append": false, 00:10:15.209 "compare": false, 00:10:15.209 "compare_and_write": false, 00:10:15.209 "abort": true, 00:10:15.209 "seek_hole": false, 00:10:15.209 "seek_data": false, 00:10:15.209 "copy": true, 00:10:15.209 "nvme_iov_md": false 00:10:15.209 }, 00:10:15.209 "memory_domains": [ 00:10:15.209 { 00:10:15.209 "dma_device_id": "system", 00:10:15.209 "dma_device_type": 1 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.209 "dma_device_type": 2 00:10:15.209 } 00:10:15.209 ], 00:10:15.209 "driver_specific": {} 00:10:15.209 } 00:10:15.209 ] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.209 [2024-11-05 03:21:28.663207] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.209 [2024-11-05 03:21:28.663497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.209 [2024-11-05 03:21:28.663659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.209 [2024-11-05 03:21:28.666433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.209 03:21:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.209 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.209 "name": "Existed_Raid", 00:10:15.209 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:15.209 "strip_size_kb": 64, 00:10:15.209 "state": "configuring", 00:10:15.209 "raid_level": "concat", 00:10:15.209 "superblock": true, 00:10:15.209 "num_base_bdevs": 3, 00:10:15.209 "num_base_bdevs_discovered": 2, 00:10:15.209 "num_base_bdevs_operational": 3, 00:10:15.209 "base_bdevs_list": [ 00:10:15.209 { 00:10:15.209 "name": "BaseBdev1", 00:10:15.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.209 "is_configured": false, 00:10:15.209 "data_offset": 0, 00:10:15.209 "data_size": 0 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "name": "BaseBdev2", 00:10:15.209 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 "data_size": 63488 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "name": "BaseBdev3", 00:10:15.209 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:15.209 "is_configured": true, 00:10:15.210 "data_offset": 2048, 00:10:15.210 "data_size": 63488 00:10:15.210 } 00:10:15.210 ] 00:10:15.210 }' 00:10:15.210 03:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.210 03:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.776 [2024-11-05 03:21:29.215401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:15.776 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.777 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.777 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.777 "name": "Existed_Raid", 00:10:15.777 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:15.777 "strip_size_kb": 64, 00:10:15.777 "state": "configuring", 00:10:15.777 "raid_level": "concat", 00:10:15.777 "superblock": true, 00:10:15.777 "num_base_bdevs": 3, 00:10:15.777 "num_base_bdevs_discovered": 1, 00:10:15.777 "num_base_bdevs_operational": 3, 00:10:15.777 "base_bdevs_list": [ 00:10:15.777 { 00:10:15.777 "name": "BaseBdev1", 00:10:15.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.777 "is_configured": false, 00:10:15.777 "data_offset": 0, 00:10:15.777 "data_size": 0 00:10:15.777 }, 00:10:15.777 { 00:10:15.777 "name": null, 00:10:15.777 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:15.777 "is_configured": false, 00:10:15.777 "data_offset": 0, 00:10:15.777 "data_size": 63488 00:10:15.777 }, 00:10:15.777 { 00:10:15.777 "name": "BaseBdev3", 00:10:15.777 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:15.777 "is_configured": true, 00:10:15.777 "data_offset": 2048, 00:10:15.777 "data_size": 63488 00:10:15.777 } 00:10:15.777 ] 00:10:15.777 }' 00:10:15.777 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.777 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.343 03:21:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.343 [2024-11-05 03:21:29.840768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.343 BaseBdev1 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.343 
03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.343 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.343 [ 00:10:16.343 { 00:10:16.343 "name": "BaseBdev1", 00:10:16.343 "aliases": [ 00:10:16.343 "abf60c26-2a05-416a-93c8-a7755601d028" 00:10:16.343 ], 00:10:16.343 "product_name": "Malloc disk", 00:10:16.343 "block_size": 512, 00:10:16.343 "num_blocks": 65536, 00:10:16.343 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:16.343 "assigned_rate_limits": { 00:10:16.343 "rw_ios_per_sec": 0, 00:10:16.343 "rw_mbytes_per_sec": 0, 00:10:16.343 "r_mbytes_per_sec": 0, 00:10:16.343 "w_mbytes_per_sec": 0 00:10:16.343 }, 00:10:16.343 "claimed": true, 00:10:16.344 "claim_type": "exclusive_write", 00:10:16.344 "zoned": false, 00:10:16.344 "supported_io_types": { 00:10:16.344 "read": true, 00:10:16.344 "write": true, 00:10:16.344 "unmap": true, 00:10:16.344 "flush": true, 00:10:16.344 "reset": true, 00:10:16.344 "nvme_admin": false, 00:10:16.344 "nvme_io": false, 00:10:16.344 "nvme_io_md": false, 00:10:16.344 "write_zeroes": true, 00:10:16.344 "zcopy": true, 00:10:16.344 "get_zone_info": false, 00:10:16.344 "zone_management": false, 00:10:16.344 "zone_append": false, 00:10:16.344 "compare": false, 00:10:16.344 "compare_and_write": false, 00:10:16.344 "abort": true, 00:10:16.344 "seek_hole": false, 00:10:16.344 "seek_data": false, 00:10:16.344 "copy": true, 00:10:16.344 "nvme_iov_md": false 00:10:16.344 }, 00:10:16.344 "memory_domains": [ 00:10:16.344 { 00:10:16.344 "dma_device_id": "system", 00:10:16.344 "dma_device_type": 1 00:10:16.344 }, 00:10:16.344 { 00:10:16.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:16.344 "dma_device_type": 2 00:10:16.344 } 00:10:16.344 ], 00:10:16.344 "driver_specific": {} 00:10:16.344 } 00:10:16.344 ] 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.344 "name": "Existed_Raid", 00:10:16.344 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:16.344 "strip_size_kb": 64, 00:10:16.344 "state": "configuring", 00:10:16.344 "raid_level": "concat", 00:10:16.344 "superblock": true, 00:10:16.344 "num_base_bdevs": 3, 00:10:16.344 "num_base_bdevs_discovered": 2, 00:10:16.344 "num_base_bdevs_operational": 3, 00:10:16.344 "base_bdevs_list": [ 00:10:16.344 { 00:10:16.344 "name": "BaseBdev1", 00:10:16.344 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:16.344 "is_configured": true, 00:10:16.344 "data_offset": 2048, 00:10:16.344 "data_size": 63488 00:10:16.344 }, 00:10:16.344 { 00:10:16.344 "name": null, 00:10:16.344 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:16.344 "is_configured": false, 00:10:16.344 "data_offset": 0, 00:10:16.344 "data_size": 63488 00:10:16.344 }, 00:10:16.344 { 00:10:16.344 "name": "BaseBdev3", 00:10:16.344 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:16.344 "is_configured": true, 00:10:16.344 "data_offset": 2048, 00:10:16.344 "data_size": 63488 00:10:16.344 } 00:10:16.344 ] 00:10:16.344 }' 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.344 03:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.911 [2024-11-05 03:21:30.469022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.911 "name": "Existed_Raid", 00:10:16.911 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:16.911 "strip_size_kb": 64, 00:10:16.911 "state": "configuring", 00:10:16.911 "raid_level": "concat", 00:10:16.911 "superblock": true, 00:10:16.911 "num_base_bdevs": 3, 00:10:16.911 "num_base_bdevs_discovered": 1, 00:10:16.911 "num_base_bdevs_operational": 3, 00:10:16.911 "base_bdevs_list": [ 00:10:16.911 { 00:10:16.911 "name": "BaseBdev1", 00:10:16.911 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:16.911 "is_configured": true, 00:10:16.911 "data_offset": 2048, 00:10:16.911 "data_size": 63488 00:10:16.911 }, 00:10:16.911 { 00:10:16.911 "name": null, 00:10:16.911 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:16.911 "is_configured": false, 00:10:16.911 "data_offset": 0, 00:10:16.911 "data_size": 63488 00:10:16.911 }, 00:10:16.911 { 00:10:16.911 "name": null, 00:10:16.911 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:16.911 "is_configured": false, 00:10:16.911 "data_offset": 0, 00:10:16.911 "data_size": 63488 00:10:16.911 } 00:10:16.911 ] 00:10:16.911 }' 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.911 03:21:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.479 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.479 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.479 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.479 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.480 [2024-11-05 03:21:31.061198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.480 03:21:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.480 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.738 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.738 "name": "Existed_Raid", 00:10:17.738 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:17.738 "strip_size_kb": 64, 00:10:17.738 "state": "configuring", 00:10:17.738 "raid_level": "concat", 00:10:17.738 "superblock": true, 00:10:17.738 "num_base_bdevs": 3, 00:10:17.738 "num_base_bdevs_discovered": 2, 00:10:17.738 "num_base_bdevs_operational": 3, 00:10:17.738 "base_bdevs_list": [ 00:10:17.738 { 00:10:17.738 "name": "BaseBdev1", 00:10:17.738 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:17.738 "is_configured": true, 00:10:17.738 "data_offset": 2048, 00:10:17.738 "data_size": 63488 00:10:17.738 }, 00:10:17.738 { 00:10:17.738 "name": null, 00:10:17.738 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:17.738 "is_configured": 
false, 00:10:17.738 "data_offset": 0, 00:10:17.738 "data_size": 63488 00:10:17.738 }, 00:10:17.738 { 00:10:17.738 "name": "BaseBdev3", 00:10:17.738 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:17.738 "is_configured": true, 00:10:17.738 "data_offset": 2048, 00:10:17.738 "data_size": 63488 00:10:17.738 } 00:10:17.738 ] 00:10:17.738 }' 00:10:17.738 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.738 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.997 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.997 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.997 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.997 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.997 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.256 [2024-11-05 03:21:31.657414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.256 03:21:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.256 "name": "Existed_Raid", 00:10:18.256 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:18.256 "strip_size_kb": 64, 00:10:18.256 "state": "configuring", 00:10:18.256 "raid_level": "concat", 00:10:18.256 "superblock": true, 00:10:18.256 "num_base_bdevs": 3, 00:10:18.256 
"num_base_bdevs_discovered": 1, 00:10:18.256 "num_base_bdevs_operational": 3, 00:10:18.256 "base_bdevs_list": [ 00:10:18.256 { 00:10:18.256 "name": null, 00:10:18.256 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:18.256 "is_configured": false, 00:10:18.256 "data_offset": 0, 00:10:18.256 "data_size": 63488 00:10:18.256 }, 00:10:18.256 { 00:10:18.256 "name": null, 00:10:18.256 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:18.256 "is_configured": false, 00:10:18.256 "data_offset": 0, 00:10:18.256 "data_size": 63488 00:10:18.256 }, 00:10:18.256 { 00:10:18.256 "name": "BaseBdev3", 00:10:18.256 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:18.256 "is_configured": true, 00:10:18.256 "data_offset": 2048, 00:10:18.256 "data_size": 63488 00:10:18.256 } 00:10:18.256 ] 00:10:18.256 }' 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.256 03:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.824 03:21:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.824 [2024-11-05 03:21:32.308713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.824 
03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.824 "name": "Existed_Raid", 00:10:18.824 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:18.824 "strip_size_kb": 64, 00:10:18.824 "state": "configuring", 00:10:18.824 "raid_level": "concat", 00:10:18.824 "superblock": true, 00:10:18.824 "num_base_bdevs": 3, 00:10:18.824 "num_base_bdevs_discovered": 2, 00:10:18.824 "num_base_bdevs_operational": 3, 00:10:18.824 "base_bdevs_list": [ 00:10:18.824 { 00:10:18.824 "name": null, 00:10:18.824 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:18.824 "is_configured": false, 00:10:18.824 "data_offset": 0, 00:10:18.824 "data_size": 63488 00:10:18.824 }, 00:10:18.824 { 00:10:18.824 "name": "BaseBdev2", 00:10:18.824 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:18.824 "is_configured": true, 00:10:18.824 "data_offset": 2048, 00:10:18.824 "data_size": 63488 00:10:18.824 }, 00:10:18.824 { 00:10:18.824 "name": "BaseBdev3", 00:10:18.824 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:18.824 "is_configured": true, 00:10:18.824 "data_offset": 2048, 00:10:18.824 "data_size": 63488 00:10:18.824 } 00:10:18.824 ] 00:10:18.824 }' 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.824 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:19.391 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abf60c26-2a05-416a-93c8-a7755601d028 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.392 [2024-11-05 03:21:32.957441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:19.392 [2024-11-05 03:21:32.957700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.392 [2024-11-05 03:21:32.957723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:19.392 NewBaseBdev 00:10:19.392 [2024-11-05 03:21:32.958070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:19.392 [2024-11-05 03:21:32.958246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.392 [2024-11-05 03:21:32.958277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:19.392 [2024-11-05 03:21:32.958459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.392 [ 00:10:19.392 { 00:10:19.392 "name": "NewBaseBdev", 00:10:19.392 "aliases": [ 00:10:19.392 "abf60c26-2a05-416a-93c8-a7755601d028" 00:10:19.392 ], 00:10:19.392 "product_name": "Malloc disk", 00:10:19.392 "block_size": 512, 
00:10:19.392 "num_blocks": 65536, 00:10:19.392 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:19.392 "assigned_rate_limits": { 00:10:19.392 "rw_ios_per_sec": 0, 00:10:19.392 "rw_mbytes_per_sec": 0, 00:10:19.392 "r_mbytes_per_sec": 0, 00:10:19.392 "w_mbytes_per_sec": 0 00:10:19.392 }, 00:10:19.392 "claimed": true, 00:10:19.392 "claim_type": "exclusive_write", 00:10:19.392 "zoned": false, 00:10:19.392 "supported_io_types": { 00:10:19.392 "read": true, 00:10:19.392 "write": true, 00:10:19.392 "unmap": true, 00:10:19.392 "flush": true, 00:10:19.392 "reset": true, 00:10:19.392 "nvme_admin": false, 00:10:19.392 "nvme_io": false, 00:10:19.392 "nvme_io_md": false, 00:10:19.392 "write_zeroes": true, 00:10:19.392 "zcopy": true, 00:10:19.392 "get_zone_info": false, 00:10:19.392 "zone_management": false, 00:10:19.392 "zone_append": false, 00:10:19.392 "compare": false, 00:10:19.392 "compare_and_write": false, 00:10:19.392 "abort": true, 00:10:19.392 "seek_hole": false, 00:10:19.392 "seek_data": false, 00:10:19.392 "copy": true, 00:10:19.392 "nvme_iov_md": false 00:10:19.392 }, 00:10:19.392 "memory_domains": [ 00:10:19.392 { 00:10:19.392 "dma_device_id": "system", 00:10:19.392 "dma_device_type": 1 00:10:19.392 }, 00:10:19.392 { 00:10:19.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.392 "dma_device_type": 2 00:10:19.392 } 00:10:19.392 ], 00:10:19.392 "driver_specific": {} 00:10:19.392 } 00:10:19.392 ] 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.392 03:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.392 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.651 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.651 "name": "Existed_Raid", 00:10:19.651 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:19.651 "strip_size_kb": 64, 00:10:19.651 "state": "online", 00:10:19.651 "raid_level": "concat", 00:10:19.651 "superblock": true, 00:10:19.651 "num_base_bdevs": 3, 00:10:19.651 "num_base_bdevs_discovered": 3, 00:10:19.651 "num_base_bdevs_operational": 3, 00:10:19.651 "base_bdevs_list": [ 00:10:19.651 { 00:10:19.651 "name": "NewBaseBdev", 00:10:19.651 "uuid": 
"abf60c26-2a05-416a-93c8-a7755601d028", 00:10:19.651 "is_configured": true, 00:10:19.651 "data_offset": 2048, 00:10:19.651 "data_size": 63488 00:10:19.651 }, 00:10:19.651 { 00:10:19.651 "name": "BaseBdev2", 00:10:19.651 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:19.651 "is_configured": true, 00:10:19.651 "data_offset": 2048, 00:10:19.651 "data_size": 63488 00:10:19.651 }, 00:10:19.651 { 00:10:19.651 "name": "BaseBdev3", 00:10:19.651 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:19.651 "is_configured": true, 00:10:19.651 "data_offset": 2048, 00:10:19.651 "data_size": 63488 00:10:19.651 } 00:10:19.651 ] 00:10:19.651 }' 00:10:19.651 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.651 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.909 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.910 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:19.910 [2024-11-05 03:21:33.522536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.910 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.168 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.168 "name": "Existed_Raid", 00:10:20.168 "aliases": [ 00:10:20.168 "84a1a28c-d51f-4f1e-a492-23766d80127a" 00:10:20.168 ], 00:10:20.168 "product_name": "Raid Volume", 00:10:20.168 "block_size": 512, 00:10:20.168 "num_blocks": 190464, 00:10:20.168 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:20.168 "assigned_rate_limits": { 00:10:20.168 "rw_ios_per_sec": 0, 00:10:20.168 "rw_mbytes_per_sec": 0, 00:10:20.168 "r_mbytes_per_sec": 0, 00:10:20.168 "w_mbytes_per_sec": 0 00:10:20.168 }, 00:10:20.168 "claimed": false, 00:10:20.168 "zoned": false, 00:10:20.168 "supported_io_types": { 00:10:20.168 "read": true, 00:10:20.168 "write": true, 00:10:20.168 "unmap": true, 00:10:20.168 "flush": true, 00:10:20.168 "reset": true, 00:10:20.168 "nvme_admin": false, 00:10:20.168 "nvme_io": false, 00:10:20.168 "nvme_io_md": false, 00:10:20.168 "write_zeroes": true, 00:10:20.168 "zcopy": false, 00:10:20.168 "get_zone_info": false, 00:10:20.168 "zone_management": false, 00:10:20.168 "zone_append": false, 00:10:20.168 "compare": false, 00:10:20.168 "compare_and_write": false, 00:10:20.168 "abort": false, 00:10:20.168 "seek_hole": false, 00:10:20.168 "seek_data": false, 00:10:20.168 "copy": false, 00:10:20.168 "nvme_iov_md": false 00:10:20.168 }, 00:10:20.168 "memory_domains": [ 00:10:20.168 { 00:10:20.168 "dma_device_id": "system", 00:10:20.168 "dma_device_type": 1 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.168 "dma_device_type": 2 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "dma_device_id": "system", 00:10:20.168 "dma_device_type": 1 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.168 "dma_device_type": 2 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "dma_device_id": "system", 00:10:20.168 "dma_device_type": 1 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.168 "dma_device_type": 2 00:10:20.168 } 00:10:20.168 ], 00:10:20.168 "driver_specific": { 00:10:20.168 "raid": { 00:10:20.168 "uuid": "84a1a28c-d51f-4f1e-a492-23766d80127a", 00:10:20.168 "strip_size_kb": 64, 00:10:20.168 "state": "online", 00:10:20.168 "raid_level": "concat", 00:10:20.168 "superblock": true, 00:10:20.168 "num_base_bdevs": 3, 00:10:20.168 "num_base_bdevs_discovered": 3, 00:10:20.168 "num_base_bdevs_operational": 3, 00:10:20.168 "base_bdevs_list": [ 00:10:20.168 { 00:10:20.168 "name": "NewBaseBdev", 00:10:20.168 "uuid": "abf60c26-2a05-416a-93c8-a7755601d028", 00:10:20.168 "is_configured": true, 00:10:20.168 "data_offset": 2048, 00:10:20.168 "data_size": 63488 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "name": "BaseBdev2", 00:10:20.168 "uuid": "abf7170e-c9df-4c57-86e1-5e6f2f1e84c8", 00:10:20.168 "is_configured": true, 00:10:20.168 "data_offset": 2048, 00:10:20.168 "data_size": 63488 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "name": "BaseBdev3", 00:10:20.168 "uuid": "caed26c6-129a-42d8-acdf-f0c8bd88aaf4", 00:10:20.168 "is_configured": true, 00:10:20.169 "data_offset": 2048, 00:10:20.169 "data_size": 63488 00:10:20.169 } 00:10:20.169 ] 00:10:20.169 } 00:10:20.169 } 00:10:20.169 }' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:20.169 BaseBdev2 00:10:20.169 BaseBdev3' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.169 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.427 [2024-11-05 03:21:33.834179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.427 [2024-11-05 03:21:33.834209] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.427 [2024-11-05 03:21:33.834321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.427 [2024-11-05 03:21:33.834419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.427 [2024-11-05 03:21:33.834440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66035 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66035 ']' 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66035 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66035 00:10:20.427 killing process with pid 66035 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66035' 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66035 00:10:20.427 [2024-11-05 03:21:33.875216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.427 03:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66035 00:10:20.685 [2024-11-05 03:21:34.132156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.621 ************************************ 00:10:21.621 END TEST raid_state_function_test_sb 00:10:21.621 ************************************ 00:10:21.621 03:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.621 00:10:21.621 real 0m12.005s 
00:10:21.621 user 0m20.184s 00:10:21.621 sys 0m1.576s 00:10:21.621 03:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.621 03:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.621 03:21:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:21.621 03:21:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:21.621 03:21:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.621 03:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.621 ************************************ 00:10:21.621 START TEST raid_superblock_test 00:10:21.621 ************************************ 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:21.621 03:21:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66666 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66666 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:21.621 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66666 ']' 00:10:21.622 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.622 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:21.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.622 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.622 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:21.622 03:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.880 [2024-11-05 03:21:35.267745] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:10:21.880 [2024-11-05 03:21:35.268156] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66666 ] 00:10:21.880 [2024-11-05 03:21:35.453822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.138 [2024-11-05 03:21:35.574393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.138 [2024-11-05 03:21:35.766093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.138 [2024-11-05 03:21:35.766386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.705 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:22.705 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:22.705 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.706 
03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.706 malloc1 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.706 [2024-11-05 03:21:36.233734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.706 [2024-11-05 03:21:36.234013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.706 [2024-11-05 03:21:36.234101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:22.706 [2024-11-05 03:21:36.234266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.706 [2024-11-05 03:21:36.237124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.706 [2024-11-05 03:21:36.237348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.706 pt1 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.706 malloc2 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.706 [2024-11-05 03:21:36.287910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.706 [2024-11-05 03:21:36.288123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.706 [2024-11-05 03:21:36.288197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:22.706 [2024-11-05 03:21:36.288438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.706 [2024-11-05 03:21:36.291452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.706 [2024-11-05 03:21:36.291602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.706 
pt2 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.706 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 malloc3 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 [2024-11-05 03:21:36.351719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.966 [2024-11-05 03:21:36.351795] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.966 [2024-11-05 03:21:36.351827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:22.966 [2024-11-05 03:21:36.351841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.966 [2024-11-05 03:21:36.354887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.966 [2024-11-05 03:21:36.354928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.966 pt3 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 [2024-11-05 03:21:36.363826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.966 [2024-11-05 03:21:36.366258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.966 [2024-11-05 03:21:36.366386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.966 [2024-11-05 03:21:36.366592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:22.966 [2024-11-05 03:21:36.366614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.966 [2024-11-05 03:21:36.366929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:22.966 [2024-11-05 03:21:36.367125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:22.966 [2024-11-05 03:21:36.367139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:22.966 [2024-11-05 03:21:36.367292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.966 03:21:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.966 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.966 "name": "raid_bdev1", 00:10:22.966 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:22.966 "strip_size_kb": 64, 00:10:22.966 "state": "online", 00:10:22.966 "raid_level": "concat", 00:10:22.966 "superblock": true, 00:10:22.966 "num_base_bdevs": 3, 00:10:22.966 "num_base_bdevs_discovered": 3, 00:10:22.966 "num_base_bdevs_operational": 3, 00:10:22.966 "base_bdevs_list": [ 00:10:22.966 { 00:10:22.966 "name": "pt1", 00:10:22.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.966 "is_configured": true, 00:10:22.966 "data_offset": 2048, 00:10:22.966 "data_size": 63488 00:10:22.966 }, 00:10:22.966 { 00:10:22.966 "name": "pt2", 00:10:22.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.966 "is_configured": true, 00:10:22.967 "data_offset": 2048, 00:10:22.967 "data_size": 63488 00:10:22.967 }, 00:10:22.967 { 00:10:22.967 "name": "pt3", 00:10:22.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.967 "is_configured": true, 00:10:22.967 "data_offset": 2048, 00:10:22.967 "data_size": 63488 00:10:22.967 } 00:10:22.967 ] 00:10:22.967 }' 00:10:22.967 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.967 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.535 [2024-11-05 03:21:36.912283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.535 "name": "raid_bdev1", 00:10:23.535 "aliases": [ 00:10:23.535 "a13ab866-5fbf-442e-ac9f-49a04ef16177" 00:10:23.535 ], 00:10:23.535 "product_name": "Raid Volume", 00:10:23.535 "block_size": 512, 00:10:23.535 "num_blocks": 190464, 00:10:23.535 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:23.535 "assigned_rate_limits": { 00:10:23.535 "rw_ios_per_sec": 0, 00:10:23.535 "rw_mbytes_per_sec": 0, 00:10:23.535 "r_mbytes_per_sec": 0, 00:10:23.535 "w_mbytes_per_sec": 0 00:10:23.535 }, 00:10:23.535 "claimed": false, 00:10:23.535 "zoned": false, 00:10:23.535 "supported_io_types": { 00:10:23.535 "read": true, 00:10:23.535 "write": true, 00:10:23.535 "unmap": true, 00:10:23.535 "flush": true, 00:10:23.535 "reset": true, 00:10:23.535 "nvme_admin": false, 00:10:23.535 "nvme_io": false, 00:10:23.535 "nvme_io_md": false, 00:10:23.535 "write_zeroes": true, 00:10:23.535 "zcopy": false, 00:10:23.535 "get_zone_info": false, 00:10:23.535 "zone_management": false, 00:10:23.535 "zone_append": false, 00:10:23.535 "compare": 
false, 00:10:23.535 "compare_and_write": false, 00:10:23.535 "abort": false, 00:10:23.535 "seek_hole": false, 00:10:23.535 "seek_data": false, 00:10:23.535 "copy": false, 00:10:23.535 "nvme_iov_md": false 00:10:23.535 }, 00:10:23.535 "memory_domains": [ 00:10:23.535 { 00:10:23.535 "dma_device_id": "system", 00:10:23.535 "dma_device_type": 1 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.535 "dma_device_type": 2 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "dma_device_id": "system", 00:10:23.535 "dma_device_type": 1 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.535 "dma_device_type": 2 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "dma_device_id": "system", 00:10:23.535 "dma_device_type": 1 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.535 "dma_device_type": 2 00:10:23.535 } 00:10:23.535 ], 00:10:23.535 "driver_specific": { 00:10:23.535 "raid": { 00:10:23.535 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:23.535 "strip_size_kb": 64, 00:10:23.535 "state": "online", 00:10:23.535 "raid_level": "concat", 00:10:23.535 "superblock": true, 00:10:23.535 "num_base_bdevs": 3, 00:10:23.535 "num_base_bdevs_discovered": 3, 00:10:23.535 "num_base_bdevs_operational": 3, 00:10:23.535 "base_bdevs_list": [ 00:10:23.535 { 00:10:23.535 "name": "pt1", 00:10:23.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.535 "is_configured": true, 00:10:23.535 "data_offset": 2048, 00:10:23.535 "data_size": 63488 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "name": "pt2", 00:10:23.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.535 "is_configured": true, 00:10:23.535 "data_offset": 2048, 00:10:23.535 "data_size": 63488 00:10:23.535 }, 00:10:23.535 { 00:10:23.535 "name": "pt3", 00:10:23.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.535 "is_configured": true, 00:10:23.535 "data_offset": 2048, 00:10:23.535 
"data_size": 63488 00:10:23.535 } 00:10:23.535 ] 00:10:23.535 } 00:10:23.535 } 00:10:23.535 }' 00:10:23.535 03:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.536 pt2 00:10:23.536 pt3' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.536 03:21:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.536 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-11-05 03:21:37.224405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.795 03:21:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a13ab866-5fbf-442e-ac9f-49a04ef16177 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a13ab866-5fbf-442e-ac9f-49a04ef16177 ']' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-11-05 03:21:37.264014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.795 [2024-11-05 03:21:37.264040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.795 [2024-11-05 03:21:37.264116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.795 [2024-11-05 03:21:37.264189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.795 [2024-11-05 03:21:37.264203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-11-05 03:21:37.408131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:23.795 [2024-11-05 03:21:37.410775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:10:23.795 [2024-11-05 03:21:37.410840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:23.795 [2024-11-05 03:21:37.410901] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:23.795 [2024-11-05 03:21:37.410984] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:23.795 [2024-11-05 03:21:37.411015] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:23.795 [2024-11-05 03:21:37.411040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.795 [2024-11-05 03:21:37.411052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:23.795 request: 00:10:23.795 { 00:10:23.796 "name": "raid_bdev1", 00:10:23.796 "raid_level": "concat", 00:10:23.796 "base_bdevs": [ 00:10:23.796 "malloc1", 00:10:23.796 "malloc2", 00:10:23.796 "malloc3" 00:10:23.796 ], 00:10:23.796 "strip_size_kb": 64, 00:10:23.796 "superblock": false, 00:10:23.796 "method": "bdev_raid_create", 00:10:23.796 "req_id": 1 00:10:23.796 } 00:10:23.796 Got JSON-RPC error response 00:10:23.796 response: 00:10:23.796 { 00:10:23.796 "code": -17, 00:10:23.796 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:23.796 } 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.796 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.055 [2024-11-05 03:21:37.476083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.055 [2024-11-05 03:21:37.476291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.055 [2024-11-05 03:21:37.476410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.055 [2024-11-05 03:21:37.476650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.055 [2024-11-05 03:21:37.479786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.055 [2024-11-05 03:21:37.479978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.055 [2024-11-05 03:21:37.480180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:24.055 [2024-11-05 03:21:37.480410] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.055 pt1 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.055 "name": "raid_bdev1", 
00:10:24.055 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:24.055 "strip_size_kb": 64, 00:10:24.055 "state": "configuring", 00:10:24.055 "raid_level": "concat", 00:10:24.055 "superblock": true, 00:10:24.055 "num_base_bdevs": 3, 00:10:24.055 "num_base_bdevs_discovered": 1, 00:10:24.055 "num_base_bdevs_operational": 3, 00:10:24.055 "base_bdevs_list": [ 00:10:24.055 { 00:10:24.055 "name": "pt1", 00:10:24.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.055 "is_configured": true, 00:10:24.055 "data_offset": 2048, 00:10:24.055 "data_size": 63488 00:10:24.055 }, 00:10:24.055 { 00:10:24.055 "name": null, 00:10:24.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.055 "is_configured": false, 00:10:24.055 "data_offset": 2048, 00:10:24.055 "data_size": 63488 00:10:24.055 }, 00:10:24.055 { 00:10:24.055 "name": null, 00:10:24.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.055 "is_configured": false, 00:10:24.055 "data_offset": 2048, 00:10:24.055 "data_size": 63488 00:10:24.055 } 00:10:24.055 ] 00:10:24.055 }' 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.055 03:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.624 [2024-11-05 03:21:38.020481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.624 [2024-11-05 03:21:38.020566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.624 [2024-11-05 03:21:38.020598] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:24.624 [2024-11-05 03:21:38.020613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.624 [2024-11-05 03:21:38.021191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.624 [2024-11-05 03:21:38.021221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.624 [2024-11-05 03:21:38.021377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.624 [2024-11-05 03:21:38.021410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.624 pt2 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.624 [2024-11-05 03:21:38.028476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.624 "name": "raid_bdev1", 00:10:24.624 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:24.624 "strip_size_kb": 64, 00:10:24.624 "state": "configuring", 00:10:24.624 "raid_level": "concat", 00:10:24.624 "superblock": true, 00:10:24.624 "num_base_bdevs": 3, 00:10:24.624 "num_base_bdevs_discovered": 1, 00:10:24.624 "num_base_bdevs_operational": 3, 00:10:24.624 "base_bdevs_list": [ 00:10:24.624 { 00:10:24.624 "name": "pt1", 00:10:24.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.624 "is_configured": true, 00:10:24.624 "data_offset": 2048, 00:10:24.624 "data_size": 63488 00:10:24.624 }, 00:10:24.624 { 00:10:24.624 "name": null, 00:10:24.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.624 "is_configured": false, 00:10:24.624 "data_offset": 0, 00:10:24.624 "data_size": 63488 00:10:24.624 }, 00:10:24.624 { 00:10:24.624 "name": null, 00:10:24.624 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.624 "is_configured": false, 00:10:24.624 "data_offset": 2048, 00:10:24.624 "data_size": 63488 00:10:24.624 } 00:10:24.624 ] 00:10:24.624 }' 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.624 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 [2024-11-05 03:21:38.544652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.192 [2024-11-05 03:21:38.544793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.192 [2024-11-05 03:21:38.544817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:25.192 [2024-11-05 03:21:38.544834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.192 [2024-11-05 03:21:38.545444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.192 [2024-11-05 03:21:38.545483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.192 [2024-11-05 03:21:38.545577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.192 [2024-11-05 03:21:38.545613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.192 pt2 00:10:25.192 03:21:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 [2024-11-05 03:21:38.556630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.192 [2024-11-05 03:21:38.556718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.192 [2024-11-05 03:21:38.556769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.192 [2024-11-05 03:21:38.556782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.192 [2024-11-05 03:21:38.557202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.192 [2024-11-05 03:21:38.557239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.192 [2024-11-05 03:21:38.557364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:25.192 [2024-11-05 03:21:38.557398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.192 [2024-11-05 03:21:38.557541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.192 [2024-11-05 03:21:38.557567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.192 [2024-11-05 03:21:38.557906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:25.192 [2024-11-05 03:21:38.558106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:25.192 [2024-11-05 03:21:38.558127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:25.192 [2024-11-05 03:21:38.558299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.192 pt3 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.192 03:21:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.192 "name": "raid_bdev1", 00:10:25.192 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:25.192 "strip_size_kb": 64, 00:10:25.192 "state": "online", 00:10:25.192 "raid_level": "concat", 00:10:25.192 "superblock": true, 00:10:25.192 "num_base_bdevs": 3, 00:10:25.192 "num_base_bdevs_discovered": 3, 00:10:25.192 "num_base_bdevs_operational": 3, 00:10:25.192 "base_bdevs_list": [ 00:10:25.192 { 00:10:25.192 "name": "pt1", 00:10:25.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.192 "is_configured": true, 00:10:25.192 "data_offset": 2048, 00:10:25.192 "data_size": 63488 00:10:25.192 }, 00:10:25.192 { 00:10:25.192 "name": "pt2", 00:10:25.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.192 "is_configured": true, 00:10:25.192 "data_offset": 2048, 00:10:25.192 "data_size": 63488 00:10:25.192 }, 00:10:25.192 { 00:10:25.192 "name": "pt3", 00:10:25.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.192 "is_configured": true, 00:10:25.192 "data_offset": 2048, 00:10:25.192 "data_size": 63488 00:10:25.192 } 00:10:25.192 ] 00:10:25.192 }' 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.192 03:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.451 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.451 [2024-11-05 03:21:39.069212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.709 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.709 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.709 "name": "raid_bdev1", 00:10:25.709 "aliases": [ 00:10:25.709 "a13ab866-5fbf-442e-ac9f-49a04ef16177" 00:10:25.709 ], 00:10:25.709 "product_name": "Raid Volume", 00:10:25.709 "block_size": 512, 00:10:25.709 "num_blocks": 190464, 00:10:25.709 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:25.709 "assigned_rate_limits": { 00:10:25.709 "rw_ios_per_sec": 0, 00:10:25.709 "rw_mbytes_per_sec": 0, 00:10:25.709 "r_mbytes_per_sec": 0, 00:10:25.709 "w_mbytes_per_sec": 0 00:10:25.709 }, 00:10:25.709 "claimed": false, 00:10:25.709 "zoned": false, 00:10:25.709 "supported_io_types": { 00:10:25.709 "read": true, 00:10:25.709 "write": true, 00:10:25.709 "unmap": true, 00:10:25.709 "flush": true, 00:10:25.709 "reset": true, 00:10:25.709 "nvme_admin": false, 00:10:25.709 "nvme_io": false, 
00:10:25.709 "nvme_io_md": false, 00:10:25.709 "write_zeroes": true, 00:10:25.709 "zcopy": false, 00:10:25.709 "get_zone_info": false, 00:10:25.709 "zone_management": false, 00:10:25.709 "zone_append": false, 00:10:25.709 "compare": false, 00:10:25.709 "compare_and_write": false, 00:10:25.709 "abort": false, 00:10:25.709 "seek_hole": false, 00:10:25.709 "seek_data": false, 00:10:25.709 "copy": false, 00:10:25.709 "nvme_iov_md": false 00:10:25.709 }, 00:10:25.709 "memory_domains": [ 00:10:25.709 { 00:10:25.709 "dma_device_id": "system", 00:10:25.709 "dma_device_type": 1 00:10:25.709 }, 00:10:25.709 { 00:10:25.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.709 "dma_device_type": 2 00:10:25.709 }, 00:10:25.709 { 00:10:25.709 "dma_device_id": "system", 00:10:25.709 "dma_device_type": 1 00:10:25.709 }, 00:10:25.709 { 00:10:25.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.710 "dma_device_type": 2 00:10:25.710 }, 00:10:25.710 { 00:10:25.710 "dma_device_id": "system", 00:10:25.710 "dma_device_type": 1 00:10:25.710 }, 00:10:25.710 { 00:10:25.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.710 "dma_device_type": 2 00:10:25.710 } 00:10:25.710 ], 00:10:25.710 "driver_specific": { 00:10:25.710 "raid": { 00:10:25.710 "uuid": "a13ab866-5fbf-442e-ac9f-49a04ef16177", 00:10:25.710 "strip_size_kb": 64, 00:10:25.710 "state": "online", 00:10:25.710 "raid_level": "concat", 00:10:25.710 "superblock": true, 00:10:25.710 "num_base_bdevs": 3, 00:10:25.710 "num_base_bdevs_discovered": 3, 00:10:25.710 "num_base_bdevs_operational": 3, 00:10:25.710 "base_bdevs_list": [ 00:10:25.710 { 00:10:25.710 "name": "pt1", 00:10:25.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.710 "is_configured": true, 00:10:25.710 "data_offset": 2048, 00:10:25.710 "data_size": 63488 00:10:25.710 }, 00:10:25.710 { 00:10:25.710 "name": "pt2", 00:10:25.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.710 "is_configured": true, 00:10:25.710 "data_offset": 2048, 00:10:25.710 
"data_size": 63488 00:10:25.710 }, 00:10:25.710 { 00:10:25.710 "name": "pt3", 00:10:25.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.710 "is_configured": true, 00:10:25.710 "data_offset": 2048, 00:10:25.710 "data_size": 63488 00:10:25.710 } 00:10:25.710 ] 00:10:25.710 } 00:10:25.710 } 00:10:25.710 }' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.710 pt2 00:10:25.710 pt3' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.710 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.969 [2024-11-05 03:21:39.401195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a13ab866-5fbf-442e-ac9f-49a04ef16177 '!=' a13ab866-5fbf-442e-ac9f-49a04ef16177 ']' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66666 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66666 ']' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66666 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66666 00:10:25.969 killing process with pid 66666 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66666' 00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66666 00:10:25.969 [2024-11-05 03:21:39.481101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:25.969 03:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66666 00:10:25.969 [2024-11-05 03:21:39.481208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.969 [2024-11-05 03:21:39.481294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.969 [2024-11-05 03:21:39.481358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:26.227 [2024-11-05 03:21:39.721010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.164 03:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:27.164 00:10:27.164 real 0m5.566s 00:10:27.164 user 0m8.411s 00:10:27.164 sys 0m0.833s 00:10:27.164 ************************************ 00:10:27.164 END TEST raid_superblock_test 00:10:27.164 ************************************ 00:10:27.164 03:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.164 03:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.164 03:21:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:27.164 03:21:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:27.164 03:21:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.164 03:21:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.164 ************************************ 00:10:27.164 START TEST raid_read_error_test 00:10:27.164 ************************************ 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.164 03:21:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mLrtqhV4qc 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66925 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66925 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 66925 ']' 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.164 03:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.424 [2024-11-05 03:21:40.911199] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:10:27.424 [2024-11-05 03:21:40.911488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66925 ] 00:10:27.683 [2024-11-05 03:21:41.101094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.683 [2024-11-05 03:21:41.231554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.941 [2024-11-05 03:21:41.428104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.941 [2024-11-05 03:21:41.428344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 BaseBdev1_malloc 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 true 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 [2024-11-05 03:21:41.996687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.509 [2024-11-05 03:21:41.996787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.509 [2024-11-05 03:21:41.996830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.509 [2024-11-05 03:21:41.996847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.509 [2024-11-05 03:21:41.999826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.509 [2024-11-05 03:21:41.999886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.509 BaseBdev1 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 BaseBdev2_malloc 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 true 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 [2024-11-05 03:21:42.060550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.509 [2024-11-05 03:21:42.060829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.509 [2024-11-05 03:21:42.060865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:28.509 [2024-11-05 03:21:42.060883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.509 [2024-11-05 03:21:42.063733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.509 [2024-11-05 03:21:42.063795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.509 BaseBdev2 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 BaseBdev3_malloc 00:10:28.509 03:21:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 true 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 [2024-11-05 03:21:42.133505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.509 [2024-11-05 03:21:42.133585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.509 [2024-11-05 03:21:42.133610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:28.509 [2024-11-05 03:21:42.133626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.509 [2024-11-05 03:21:42.136470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.509 [2024-11-05 03:21:42.136522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.509 BaseBdev3 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.509 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.509 [2024-11-05 03:21:42.141595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.509 [2024-11-05 03:21:42.144050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.509 [2024-11-05 03:21:42.144149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.509 [2024-11-05 03:21:42.144427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.509 [2024-11-05 03:21:42.144450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:28.509 [2024-11-05 03:21:42.144737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:28.509 [2024-11-05 03:21:42.144927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.509 [2024-11-05 03:21:42.144957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:28.509 [2024-11-05 03:21:42.145121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.768 03:21:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.768 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.768 "name": "raid_bdev1", 00:10:28.768 "uuid": "68e7d5dd-98a1-40a8-b55f-204d52f197df", 00:10:28.768 "strip_size_kb": 64, 00:10:28.768 "state": "online", 00:10:28.768 "raid_level": "concat", 00:10:28.768 "superblock": true, 00:10:28.768 "num_base_bdevs": 3, 00:10:28.768 "num_base_bdevs_discovered": 3, 00:10:28.768 "num_base_bdevs_operational": 3, 00:10:28.768 "base_bdevs_list": [ 00:10:28.768 { 00:10:28.768 "name": "BaseBdev1", 00:10:28.768 "uuid": "0de3ee13-2b0a-5e00-baea-a0def3053864", 00:10:28.768 "is_configured": true, 00:10:28.768 "data_offset": 2048, 00:10:28.768 "data_size": 63488 00:10:28.769 }, 00:10:28.769 { 00:10:28.769 "name": "BaseBdev2", 00:10:28.769 "uuid": "113da3bb-97bc-5c7b-9cd4-8a8545d8ce30", 00:10:28.769 "is_configured": true, 00:10:28.769 "data_offset": 2048, 00:10:28.769 "data_size": 63488 
00:10:28.769 }, 00:10:28.769 { 00:10:28.769 "name": "BaseBdev3", 00:10:28.769 "uuid": "3e89049e-998e-52ce-ac7b-e208dc9623f7", 00:10:28.769 "is_configured": true, 00:10:28.769 "data_offset": 2048, 00:10:28.769 "data_size": 63488 00:10:28.769 } 00:10:28.769 ] 00:10:28.769 }' 00:10:28.769 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.769 03:21:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.027 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.027 03:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.286 [2024-11-05 03:21:42.775078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.223 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.224 "name": "raid_bdev1", 00:10:30.224 "uuid": "68e7d5dd-98a1-40a8-b55f-204d52f197df", 00:10:30.224 "strip_size_kb": 64, 00:10:30.224 "state": "online", 00:10:30.224 "raid_level": "concat", 00:10:30.224 "superblock": true, 00:10:30.224 "num_base_bdevs": 3, 00:10:30.224 "num_base_bdevs_discovered": 3, 00:10:30.224 "num_base_bdevs_operational": 3, 00:10:30.224 "base_bdevs_list": [ 00:10:30.224 { 00:10:30.224 "name": "BaseBdev1", 00:10:30.224 "uuid": "0de3ee13-2b0a-5e00-baea-a0def3053864", 00:10:30.224 "is_configured": true, 00:10:30.224 "data_offset": 2048, 00:10:30.224 "data_size": 63488 
00:10:30.224 }, 00:10:30.224 { 00:10:30.224 "name": "BaseBdev2", 00:10:30.224 "uuid": "113da3bb-97bc-5c7b-9cd4-8a8545d8ce30", 00:10:30.224 "is_configured": true, 00:10:30.224 "data_offset": 2048, 00:10:30.224 "data_size": 63488 00:10:30.224 }, 00:10:30.224 { 00:10:30.224 "name": "BaseBdev3", 00:10:30.224 "uuid": "3e89049e-998e-52ce-ac7b-e208dc9623f7", 00:10:30.224 "is_configured": true, 00:10:30.224 "data_offset": 2048, 00:10:30.224 "data_size": 63488 00:10:30.224 } 00:10:30.224 ] 00:10:30.224 }' 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.224 03:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.791 [2024-11-05 03:21:44.205379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.791 [2024-11-05 03:21:44.205428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.791 [2024-11-05 03:21:44.208847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.791 [2024-11-05 03:21:44.208898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.791 [2024-11-05 03:21:44.208944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.791 [2024-11-05 03:21:44.208960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:30.791 { 00:10:30.791 "results": [ 00:10:30.791 { 00:10:30.791 "job": "raid_bdev1", 00:10:30.791 "core_mask": "0x1", 00:10:30.791 "workload": "randrw", 00:10:30.791 "percentage": 50, 
00:10:30.791 "status": "finished", 00:10:30.791 "queue_depth": 1, 00:10:30.791 "io_size": 131072, 00:10:30.791 "runtime": 1.428077, 00:10:30.791 "iops": 11666.737857972645, 00:10:30.791 "mibps": 1458.3422322465806, 00:10:30.791 "io_failed": 1, 00:10:30.791 "io_timeout": 0, 00:10:30.791 "avg_latency_us": 119.4941827348021, 00:10:30.791 "min_latency_us": 35.14181818181818, 00:10:30.791 "max_latency_us": 1899.0545454545454 00:10:30.791 } 00:10:30.791 ], 00:10:30.791 "core_count": 1 00:10:30.791 } 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66925 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 66925 ']' 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 66925 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66925 00:10:30.791 killing process with pid 66925 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66925' 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 66925 00:10:30.791 [2024-11-05 03:21:44.247837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.791 03:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 66925 00:10:31.048 [2024-11-05 
03:21:44.442356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.983 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mLrtqhV4qc 00:10:31.983 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:31.984 00:10:31.984 real 0m4.629s 00:10:31.984 user 0m5.811s 00:10:31.984 sys 0m0.606s 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.984 ************************************ 00:10:31.984 END TEST raid_read_error_test 00:10:31.984 ************************************ 00:10:31.984 03:21:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 03:21:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:31.984 03:21:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:31.984 03:21:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:31.984 03:21:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 ************************************ 00:10:31.984 START TEST raid_write_error_test 00:10:31.984 ************************************ 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:10:31.984 03:21:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:31.984 03:21:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1zQZ1qotlK 00:10:31.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67069 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67069 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67069 ']' 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.984 03:21:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 [2024-11-05 03:21:45.586905] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:10:31.984 [2024-11-05 03:21:45.587117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67069 ] 00:10:32.243 [2024-11-05 03:21:45.766571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.501 [2024-11-05 03:21:45.882379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.501 [2024-11-05 03:21:46.071783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.501 [2024-11-05 03:21:46.071853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 BaseBdev1_malloc 00:10:33.069 03:21:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 true 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 [2024-11-05 03:21:46.555148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:33.069 [2024-11-05 03:21:46.555231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.069 [2024-11-05 03:21:46.555260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:33.069 [2024-11-05 03:21:46.555279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.069 [2024-11-05 03:21:46.558158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.069 [2024-11-05 03:21:46.558225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:33.069 BaseBdev1 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 BaseBdev2_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 true 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 [2024-11-05 03:21:46.611359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:33.069 [2024-11-05 03:21:46.611454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.069 [2024-11-05 03:21:46.611482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:33.069 [2024-11-05 03:21:46.611499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.069 [2024-11-05 03:21:46.614438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.069 [2024-11-05 03:21:46.614486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:33.069 BaseBdev2 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 BaseBdev3_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 true 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.069 [2024-11-05 03:21:46.675682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:33.069 [2024-11-05 03:21:46.675765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.069 [2024-11-05 03:21:46.675791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:33.069 [2024-11-05 03:21:46.675809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.069 [2024-11-05 03:21:46.678761] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.069 [2024-11-05 03:21:46.678809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:33.069 BaseBdev3 00:10:33.069 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.070 [2024-11-05 03:21:46.683759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.070 [2024-11-05 03:21:46.686434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.070 [2024-11-05 03:21:46.686697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.070 [2024-11-05 03:21:46.687137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.070 [2024-11-05 03:21:46.687296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:33.070 [2024-11-05 03:21:46.687699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:33.070 [2024-11-05 03:21:46.688060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.070 [2024-11-05 03:21:46.688091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:33.070 [2024-11-05 03:21:46.688499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.070 
03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.070 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.332 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.332 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.332 "name": "raid_bdev1", 00:10:33.332 "uuid": "185f2b0e-f486-42c1-a909-9ebd77d49f48", 00:10:33.332 "strip_size_kb": 64, 00:10:33.332 "state": "online", 00:10:33.332 "raid_level": "concat", 00:10:33.332 "superblock": true, 
00:10:33.332 "num_base_bdevs": 3, 00:10:33.332 "num_base_bdevs_discovered": 3, 00:10:33.332 "num_base_bdevs_operational": 3, 00:10:33.332 "base_bdevs_list": [ 00:10:33.332 { 00:10:33.332 "name": "BaseBdev1", 00:10:33.332 "uuid": "3e303c8d-c67e-5fdd-90d7-9495380ca251", 00:10:33.332 "is_configured": true, 00:10:33.332 "data_offset": 2048, 00:10:33.332 "data_size": 63488 00:10:33.332 }, 00:10:33.332 { 00:10:33.332 "name": "BaseBdev2", 00:10:33.332 "uuid": "c6f4a310-76c0-578d-b1a5-ca7dd324e880", 00:10:33.332 "is_configured": true, 00:10:33.332 "data_offset": 2048, 00:10:33.332 "data_size": 63488 00:10:33.332 }, 00:10:33.332 { 00:10:33.332 "name": "BaseBdev3", 00:10:33.332 "uuid": "2d8c453a-ad79-53b6-b3d1-0548b309bdcc", 00:10:33.332 "is_configured": true, 00:10:33.332 "data_offset": 2048, 00:10:33.332 "data_size": 63488 00:10:33.332 } 00:10:33.332 ] 00:10:33.332 }' 00:10:33.332 03:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.332 03:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.592 03:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:33.592 03:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:33.851 [2024-11-05 03:21:47.317804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:34.789 "name": "raid_bdev1", 00:10:34.789 "uuid": "185f2b0e-f486-42c1-a909-9ebd77d49f48", 00:10:34.789 "strip_size_kb": 64, 00:10:34.789 "state": "online", 00:10:34.789 "raid_level": "concat", 00:10:34.789 "superblock": true, 00:10:34.789 "num_base_bdevs": 3, 00:10:34.789 "num_base_bdevs_discovered": 3, 00:10:34.789 "num_base_bdevs_operational": 3, 00:10:34.789 "base_bdevs_list": [ 00:10:34.789 { 00:10:34.789 "name": "BaseBdev1", 00:10:34.789 "uuid": "3e303c8d-c67e-5fdd-90d7-9495380ca251", 00:10:34.789 "is_configured": true, 00:10:34.789 "data_offset": 2048, 00:10:34.789 "data_size": 63488 00:10:34.789 }, 00:10:34.789 { 00:10:34.789 "name": "BaseBdev2", 00:10:34.789 "uuid": "c6f4a310-76c0-578d-b1a5-ca7dd324e880", 00:10:34.789 "is_configured": true, 00:10:34.789 "data_offset": 2048, 00:10:34.789 "data_size": 63488 00:10:34.789 }, 00:10:34.789 { 00:10:34.789 "name": "BaseBdev3", 00:10:34.789 "uuid": "2d8c453a-ad79-53b6-b3d1-0548b309bdcc", 00:10:34.789 "is_configured": true, 00:10:34.789 "data_offset": 2048, 00:10:34.789 "data_size": 63488 00:10:34.789 } 00:10:34.789 ] 00:10:34.789 }' 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.789 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.356 [2024-11-05 03:21:48.765241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.356 [2024-11-05 03:21:48.765275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.356 [2024-11-05 03:21:48.768552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:10:35.356 [2024-11-05 03:21:48.768608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.356 [2024-11-05 03:21:48.768661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.356 [2024-11-05 03:21:48.768707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:35.356 { 00:10:35.356 "results": [ 00:10:35.356 { 00:10:35.356 "job": "raid_bdev1", 00:10:35.356 "core_mask": "0x1", 00:10:35.356 "workload": "randrw", 00:10:35.356 "percentage": 50, 00:10:35.356 "status": "finished", 00:10:35.356 "queue_depth": 1, 00:10:35.356 "io_size": 131072, 00:10:35.356 "runtime": 1.44493, 00:10:35.356 "iops": 11316.811195005987, 00:10:35.356 "mibps": 1414.6013993757483, 00:10:35.356 "io_failed": 1, 00:10:35.356 "io_timeout": 0, 00:10:35.356 "avg_latency_us": 122.94156535081136, 00:10:35.356 "min_latency_us": 38.63272727272727, 00:10:35.356 "max_latency_us": 1765.0036363636364 00:10:35.356 } 00:10:35.356 ], 00:10:35.356 "core_count": 1 00:10:35.356 } 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67069 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67069 ']' 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67069 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67069 00:10:35.356 killing process with pid 67069 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67069' 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67069 00:10:35.356 [2024-11-05 03:21:48.803274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.356 03:21:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67069 00:10:35.614 [2024-11-05 03:21:48.993329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.549 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1zQZ1qotlK 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:10:36.550 00:10:36.550 real 0m4.523s 00:10:36.550 user 0m5.613s 00:10:36.550 sys 0m0.578s 00:10:36.550 ************************************ 00:10:36.550 END TEST raid_write_error_test 00:10:36.550 ************************************ 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.550 03:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.550 
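The test above repeatedly calls `verify_raid_bdev_state` (bdev_raid.sh@103-115), which fetches `rpc_cmd bdev_raid_get_bdevs all`, selects the named bdev with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares the state fields against the expected values. A minimal Python sketch of that same check, run against a trimmed copy of the JSON captured in the log above — the function and sample data here are illustrative, not part of the SPDK test suite:

```python
import json

def verify_raid_bdev_state(rpc_output, name, expected_state,
                           raid_level, strip_size, num_operational):
    """Mirror the shell helper: select the bdev by name from the
    bdev_raid_get_bdevs JSON, then assert on the fields the test checks."""
    info = next(b for b in json.loads(rpc_output) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# Trimmed version of the raid_bdev_info JSON captured in the log above.
sample = json.dumps([{
    "name": "raid_bdev1",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3,
}])

info = verify_raid_bdev_state(sample, "raid_bdev1", "online", "concat", 64, 3)
print(info["num_base_bdevs_discovered"])  # 3
```

The shell helper additionally parses `num_base_bdevs` and `num_base_bdevs_discovered` out of the same JSON; the sketch keeps only the comparison logic that decides pass/fail.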
03:21:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:36.550 03:21:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:10:36.550 03:21:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:36.550 03:21:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.550 03:21:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.550 ************************************ 00:10:36.550 START TEST raid_state_function_test 00:10:36.550 ************************************ 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.550 Process raid pid: 67215 00:10:36.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67215 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67215' 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67215 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@833 -- # '[' -z 67215 ']' 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.550 03:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.550 [2024-11-05 03:21:50.132032] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:10:36.550 [2024-11-05 03:21:50.132192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.810 [2024-11-05 03:21:50.306476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.810 [2024-11-05 03:21:50.425418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.068 [2024-11-05 03:21:50.620170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.068 [2024-11-05 03:21:50.620204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.635 [2024-11-05 03:21:51.054390] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.635 [2024-11-05 03:21:51.054626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.635 [2024-11-05 03:21:51.054655] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.635 [2024-11-05 03:21:51.054675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.635 [2024-11-05 03:21:51.054687] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.635 [2024-11-05 03:21:51.054701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.635 03:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.635 "name": "Existed_Raid", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.635 "strip_size_kb": 0, 00:10:37.635 "state": "configuring", 00:10:37.635 "raid_level": "raid1", 00:10:37.635 "superblock": false, 00:10:37.635 "num_base_bdevs": 3, 00:10:37.635 "num_base_bdevs_discovered": 0, 00:10:37.635 "num_base_bdevs_operational": 3, 00:10:37.635 "base_bdevs_list": [ 00:10:37.635 { 00:10:37.635 "name": "BaseBdev1", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.635 "is_configured": false, 00:10:37.635 "data_offset": 0, 00:10:37.635 "data_size": 0 00:10:37.635 }, 00:10:37.635 { 00:10:37.635 "name": "BaseBdev2", 00:10:37.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.635 "is_configured": false, 00:10:37.635 "data_offset": 0, 00:10:37.635 "data_size": 0 00:10:37.635 }, 00:10:37.635 { 00:10:37.635 "name": "BaseBdev3", 00:10:37.635 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:37.635 "is_configured": false, 00:10:37.635 "data_offset": 0, 00:10:37.635 "data_size": 0 00:10:37.635 } 00:10:37.635 ] 00:10:37.635 }' 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.635 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.203 [2024-11-05 03:21:51.586483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.203 [2024-11-05 03:21:51.586526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.203 [2024-11-05 03:21:51.594433] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.203 [2024-11-05 03:21:51.594505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.203 [2024-11-05 03:21:51.594521] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.203 [2024-11-05 03:21:51.594537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:38.203 [2024-11-05 03:21:51.594546] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.203 [2024-11-05 03:21:51.594560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.203 [2024-11-05 03:21:51.639742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.203 BaseBdev1 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:38.203 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.204 [ 00:10:38.204 { 00:10:38.204 "name": "BaseBdev1", 00:10:38.204 "aliases": [ 00:10:38.204 "b8199216-f522-498a-a6a3-0f4468722d0f" 00:10:38.204 ], 00:10:38.204 "product_name": "Malloc disk", 00:10:38.204 "block_size": 512, 00:10:38.204 "num_blocks": 65536, 00:10:38.204 "uuid": "b8199216-f522-498a-a6a3-0f4468722d0f", 00:10:38.204 "assigned_rate_limits": { 00:10:38.204 "rw_ios_per_sec": 0, 00:10:38.204 "rw_mbytes_per_sec": 0, 00:10:38.204 "r_mbytes_per_sec": 0, 00:10:38.204 "w_mbytes_per_sec": 0 00:10:38.204 }, 00:10:38.204 "claimed": true, 00:10:38.204 "claim_type": "exclusive_write", 00:10:38.204 "zoned": false, 00:10:38.204 "supported_io_types": { 00:10:38.204 "read": true, 00:10:38.204 "write": true, 00:10:38.204 "unmap": true, 00:10:38.204 "flush": true, 00:10:38.204 "reset": true, 00:10:38.204 "nvme_admin": false, 00:10:38.204 "nvme_io": false, 00:10:38.204 "nvme_io_md": false, 00:10:38.204 "write_zeroes": true, 00:10:38.204 "zcopy": true, 00:10:38.204 "get_zone_info": false, 00:10:38.204 "zone_management": false, 00:10:38.204 "zone_append": false, 00:10:38.204 "compare": false, 00:10:38.204 "compare_and_write": false, 00:10:38.204 "abort": true, 00:10:38.204 "seek_hole": false, 00:10:38.204 "seek_data": false, 00:10:38.204 "copy": true, 00:10:38.204 "nvme_iov_md": false 00:10:38.204 }, 00:10:38.204 "memory_domains": [ 00:10:38.204 { 00:10:38.204 "dma_device_id": "system", 00:10:38.204 "dma_device_type": 1 00:10:38.204 }, 00:10:38.204 { 00:10:38.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:38.204 "dma_device_type": 2 00:10:38.204 } 00:10:38.204 ], 00:10:38.204 "driver_specific": {} 00:10:38.204 } 00:10:38.204 ] 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.204 03:21:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.204 "name": "Existed_Raid", 00:10:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.204 "strip_size_kb": 0, 00:10:38.204 "state": "configuring", 00:10:38.204 "raid_level": "raid1", 00:10:38.204 "superblock": false, 00:10:38.204 "num_base_bdevs": 3, 00:10:38.204 "num_base_bdevs_discovered": 1, 00:10:38.204 "num_base_bdevs_operational": 3, 00:10:38.204 "base_bdevs_list": [ 00:10:38.204 { 00:10:38.204 "name": "BaseBdev1", 00:10:38.204 "uuid": "b8199216-f522-498a-a6a3-0f4468722d0f", 00:10:38.204 "is_configured": true, 00:10:38.204 "data_offset": 0, 00:10:38.204 "data_size": 65536 00:10:38.204 }, 00:10:38.204 { 00:10:38.204 "name": "BaseBdev2", 00:10:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.204 "is_configured": false, 00:10:38.204 "data_offset": 0, 00:10:38.204 "data_size": 0 00:10:38.204 }, 00:10:38.204 { 00:10:38.204 "name": "BaseBdev3", 00:10:38.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.204 "is_configured": false, 00:10:38.204 "data_offset": 0, 00:10:38.204 "data_size": 0 00:10:38.204 } 00:10:38.204 ] 00:10:38.204 }' 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.204 03:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.772 [2024-11-05 03:21:52.175902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.772 [2024-11-05 03:21:52.175962] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.772 [2024-11-05 03:21:52.183933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.772 [2024-11-05 03:21:52.186397] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.772 [2024-11-05 03:21:52.186452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.772 [2024-11-05 03:21:52.186469] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.772 [2024-11-05 03:21:52.186485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.772 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.773 03:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.773 "name": "Existed_Raid", 00:10:38.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.773 "strip_size_kb": 0, 00:10:38.773 "state": "configuring", 00:10:38.773 "raid_level": "raid1", 00:10:38.773 "superblock": false, 00:10:38.773 "num_base_bdevs": 3, 00:10:38.773 "num_base_bdevs_discovered": 1, 00:10:38.773 "num_base_bdevs_operational": 3, 00:10:38.773 "base_bdevs_list": [ 00:10:38.773 { 00:10:38.773 "name": "BaseBdev1", 00:10:38.773 "uuid": "b8199216-f522-498a-a6a3-0f4468722d0f", 00:10:38.773 "is_configured": true, 00:10:38.773 "data_offset": 0, 
00:10:38.773 "data_size": 65536 00:10:38.773 }, 00:10:38.773 { 00:10:38.773 "name": "BaseBdev2", 00:10:38.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.773 "is_configured": false, 00:10:38.773 "data_offset": 0, 00:10:38.773 "data_size": 0 00:10:38.773 }, 00:10:38.773 { 00:10:38.773 "name": "BaseBdev3", 00:10:38.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.773 "is_configured": false, 00:10:38.773 "data_offset": 0, 00:10:38.773 "data_size": 0 00:10:38.773 } 00:10:38.773 ] 00:10:38.773 }' 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.773 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 [2024-11-05 03:21:52.727046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.341 BaseBdev2 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 [ 00:10:39.341 { 00:10:39.341 "name": "BaseBdev2", 00:10:39.341 "aliases": [ 00:10:39.341 "1cbd2fbf-3828-473d-ab86-7a5c47716fce" 00:10:39.341 ], 00:10:39.341 "product_name": "Malloc disk", 00:10:39.341 "block_size": 512, 00:10:39.341 "num_blocks": 65536, 00:10:39.341 "uuid": "1cbd2fbf-3828-473d-ab86-7a5c47716fce", 00:10:39.341 "assigned_rate_limits": { 00:10:39.341 "rw_ios_per_sec": 0, 00:10:39.341 "rw_mbytes_per_sec": 0, 00:10:39.341 "r_mbytes_per_sec": 0, 00:10:39.341 "w_mbytes_per_sec": 0 00:10:39.341 }, 00:10:39.341 "claimed": true, 00:10:39.341 "claim_type": "exclusive_write", 00:10:39.341 "zoned": false, 00:10:39.341 "supported_io_types": { 00:10:39.341 "read": true, 00:10:39.341 "write": true, 00:10:39.341 "unmap": true, 00:10:39.341 "flush": true, 00:10:39.341 "reset": true, 00:10:39.341 "nvme_admin": false, 00:10:39.341 "nvme_io": false, 00:10:39.341 "nvme_io_md": false, 00:10:39.341 "write_zeroes": true, 00:10:39.341 "zcopy": true, 00:10:39.341 "get_zone_info": false, 00:10:39.341 "zone_management": false, 00:10:39.341 "zone_append": false, 00:10:39.341 "compare": false, 00:10:39.341 "compare_and_write": false, 00:10:39.341 "abort": true, 00:10:39.341 "seek_hole": 
false, 00:10:39.341 "seek_data": false, 00:10:39.341 "copy": true, 00:10:39.341 "nvme_iov_md": false 00:10:39.341 }, 00:10:39.341 "memory_domains": [ 00:10:39.341 { 00:10:39.341 "dma_device_id": "system", 00:10:39.341 "dma_device_type": 1 00:10:39.341 }, 00:10:39.341 { 00:10:39.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.341 "dma_device_type": 2 00:10:39.341 } 00:10:39.341 ], 00:10:39.341 "driver_specific": {} 00:10:39.341 } 00:10:39.341 ] 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.341 "name": "Existed_Raid", 00:10:39.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.341 "strip_size_kb": 0, 00:10:39.341 "state": "configuring", 00:10:39.341 "raid_level": "raid1", 00:10:39.341 "superblock": false, 00:10:39.341 "num_base_bdevs": 3, 00:10:39.341 "num_base_bdevs_discovered": 2, 00:10:39.341 "num_base_bdevs_operational": 3, 00:10:39.341 "base_bdevs_list": [ 00:10:39.341 { 00:10:39.341 "name": "BaseBdev1", 00:10:39.341 "uuid": "b8199216-f522-498a-a6a3-0f4468722d0f", 00:10:39.341 "is_configured": true, 00:10:39.341 "data_offset": 0, 00:10:39.341 "data_size": 65536 00:10:39.341 }, 00:10:39.341 { 00:10:39.341 "name": "BaseBdev2", 00:10:39.341 "uuid": "1cbd2fbf-3828-473d-ab86-7a5c47716fce", 00:10:39.341 "is_configured": true, 00:10:39.341 "data_offset": 0, 00:10:39.341 "data_size": 65536 00:10:39.341 }, 00:10:39.341 { 00:10:39.341 "name": "BaseBdev3", 00:10:39.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.341 "is_configured": false, 00:10:39.341 "data_offset": 0, 00:10:39.341 "data_size": 0 00:10:39.341 } 00:10:39.341 ] 00:10:39.341 }' 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.341 03:21:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.909 [2024-11-05 03:21:53.308703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.909 [2024-11-05 03:21:53.308772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.909 [2024-11-05 03:21:53.308790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:39.909 [2024-11-05 03:21:53.309102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:39.909 [2024-11-05 03:21:53.309303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.909 [2024-11-05 03:21:53.309334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:39.909 [2024-11-05 03:21:53.309698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.909 BaseBdev3 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.909 03:21:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.909 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.909 [ 00:10:39.909 { 00:10:39.909 "name": "BaseBdev3", 00:10:39.909 "aliases": [ 00:10:39.909 "2e6d7a44-336b-40ba-8376-58b329e67101" 00:10:39.909 ], 00:10:39.909 "product_name": "Malloc disk", 00:10:39.909 "block_size": 512, 00:10:39.909 "num_blocks": 65536, 00:10:39.909 "uuid": "2e6d7a44-336b-40ba-8376-58b329e67101", 00:10:39.909 "assigned_rate_limits": { 00:10:39.909 "rw_ios_per_sec": 0, 00:10:39.909 "rw_mbytes_per_sec": 0, 00:10:39.909 "r_mbytes_per_sec": 0, 00:10:39.909 "w_mbytes_per_sec": 0 00:10:39.909 }, 00:10:39.909 "claimed": true, 00:10:39.909 "claim_type": "exclusive_write", 00:10:39.909 "zoned": false, 00:10:39.909 "supported_io_types": { 00:10:39.909 "read": true, 00:10:39.909 "write": true, 00:10:39.909 "unmap": true, 00:10:39.909 "flush": true, 00:10:39.909 "reset": true, 00:10:39.909 "nvme_admin": false, 00:10:39.909 "nvme_io": false, 00:10:39.909 "nvme_io_md": false, 00:10:39.909 "write_zeroes": true, 00:10:39.909 "zcopy": true, 00:10:39.909 "get_zone_info": false, 00:10:39.909 "zone_management": false, 00:10:39.909 "zone_append": false, 00:10:39.909 "compare": false, 
00:10:39.909 "compare_and_write": false, 00:10:39.909 "abort": true, 00:10:39.909 "seek_hole": false, 00:10:39.909 "seek_data": false, 00:10:39.909 "copy": true, 00:10:39.909 "nvme_iov_md": false 00:10:39.909 }, 00:10:39.909 "memory_domains": [ 00:10:39.909 { 00:10:39.910 "dma_device_id": "system", 00:10:39.910 "dma_device_type": 1 00:10:39.910 }, 00:10:39.910 { 00:10:39.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.910 "dma_device_type": 2 00:10:39.910 } 00:10:39.910 ], 00:10:39.910 "driver_specific": {} 00:10:39.910 } 00:10:39.910 ] 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.910 "name": "Existed_Raid", 00:10:39.910 "uuid": "768d17e7-be06-4663-b8d9-8f7ea3f543ab", 00:10:39.910 "strip_size_kb": 0, 00:10:39.910 "state": "online", 00:10:39.910 "raid_level": "raid1", 00:10:39.910 "superblock": false, 00:10:39.910 "num_base_bdevs": 3, 00:10:39.910 "num_base_bdevs_discovered": 3, 00:10:39.910 "num_base_bdevs_operational": 3, 00:10:39.910 "base_bdevs_list": [ 00:10:39.910 { 00:10:39.910 "name": "BaseBdev1", 00:10:39.910 "uuid": "b8199216-f522-498a-a6a3-0f4468722d0f", 00:10:39.910 "is_configured": true, 00:10:39.910 "data_offset": 0, 00:10:39.910 "data_size": 65536 00:10:39.910 }, 00:10:39.910 { 00:10:39.910 "name": "BaseBdev2", 00:10:39.910 "uuid": "1cbd2fbf-3828-473d-ab86-7a5c47716fce", 00:10:39.910 "is_configured": true, 00:10:39.910 "data_offset": 0, 00:10:39.910 "data_size": 65536 00:10:39.910 }, 00:10:39.910 { 00:10:39.910 "name": "BaseBdev3", 00:10:39.910 "uuid": "2e6d7a44-336b-40ba-8376-58b329e67101", 00:10:39.910 "is_configured": true, 00:10:39.910 "data_offset": 0, 00:10:39.910 "data_size": 65536 00:10:39.910 } 00:10:39.910 ] 00:10:39.910 }' 00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
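The `verify_raid_bdev_state` calls above fetch `bdev_raid_get_bdevs all` and pipe it through `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing state fields. A minimal standalone sketch of that check, run against a canned, abridged copy of the RPC output instead of a live SPDK target (the JSON literal here is a hypothetical stand-in assembled from the log above, not real RPC output):

```shell
#!/usr/bin/env bash
# Sketch of the state check done by verify_raid_bdev_state, using a canned
# copy of what bdev_raid_get_bdevs returns (abridged from the log above).
raid_bdevs='[{"name": "Existed_Raid", "state": "online", "raid_level": "raid1",
              "num_base_bdevs": 3, "num_base_bdevs_discovered": 3}]'

# Same jq filter the test uses to pick out the raid bdev under test.
tmp=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$raid_bdevs")

# Pull out the fields that verify_raid_bdev_state compares.
state=$(jq -r '.state' <<< "$tmp")
level=$(jq -r '.raid_level' <<< "$tmp")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")

[[ $state == online && $level == raid1 && $discovered == 3 ]] &&
  echo "Existed_Raid: $state $level ($discovered/3 base bdevs)"
```

In the real test the expected state flips between `configuring` (while base bdevs are still being added) and `online` (once `num_base_bdevs_discovered` reaches `num_base_bdevs`), which is exactly the transition visible in the two JSON dumps above.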
00:10:39.910 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.479 [2024-11-05 03:21:53.857353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.479 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.479 "name": "Existed_Raid", 00:10:40.479 "aliases": [ 00:10:40.479 "768d17e7-be06-4663-b8d9-8f7ea3f543ab" 00:10:40.479 ], 00:10:40.479 "product_name": "Raid Volume", 00:10:40.479 "block_size": 512, 00:10:40.479 "num_blocks": 65536, 00:10:40.479 "uuid": "768d17e7-be06-4663-b8d9-8f7ea3f543ab", 00:10:40.480 "assigned_rate_limits": { 00:10:40.480 "rw_ios_per_sec": 0, 00:10:40.480 "rw_mbytes_per_sec": 0, 00:10:40.480 "r_mbytes_per_sec": 
0, 00:10:40.480 "w_mbytes_per_sec": 0 00:10:40.480 }, 00:10:40.480 "claimed": false, 00:10:40.480 "zoned": false, 00:10:40.480 "supported_io_types": { 00:10:40.480 "read": true, 00:10:40.480 "write": true, 00:10:40.480 "unmap": false, 00:10:40.480 "flush": false, 00:10:40.480 "reset": true, 00:10:40.480 "nvme_admin": false, 00:10:40.480 "nvme_io": false, 00:10:40.480 "nvme_io_md": false, 00:10:40.480 "write_zeroes": true, 00:10:40.480 "zcopy": false, 00:10:40.480 "get_zone_info": false, 00:10:40.480 "zone_management": false, 00:10:40.480 "zone_append": false, 00:10:40.480 "compare": false, 00:10:40.480 "compare_and_write": false, 00:10:40.480 "abort": false, 00:10:40.480 "seek_hole": false, 00:10:40.480 "seek_data": false, 00:10:40.480 "copy": false, 00:10:40.480 "nvme_iov_md": false 00:10:40.480 }, 00:10:40.480 "memory_domains": [ 00:10:40.480 { 00:10:40.480 "dma_device_id": "system", 00:10:40.480 "dma_device_type": 1 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.480 "dma_device_type": 2 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "dma_device_id": "system", 00:10:40.480 "dma_device_type": 1 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.480 "dma_device_type": 2 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "dma_device_id": "system", 00:10:40.480 "dma_device_type": 1 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.480 "dma_device_type": 2 00:10:40.480 } 00:10:40.480 ], 00:10:40.480 "driver_specific": { 00:10:40.480 "raid": { 00:10:40.480 "uuid": "768d17e7-be06-4663-b8d9-8f7ea3f543ab", 00:10:40.480 "strip_size_kb": 0, 00:10:40.480 "state": "online", 00:10:40.480 "raid_level": "raid1", 00:10:40.480 "superblock": false, 00:10:40.480 "num_base_bdevs": 3, 00:10:40.480 "num_base_bdevs_discovered": 3, 00:10:40.480 "num_base_bdevs_operational": 3, 00:10:40.480 "base_bdevs_list": [ 00:10:40.480 { 00:10:40.480 "name": "BaseBdev1", 
00:10:40.480 "uuid": "b8199216-f522-498a-a6a3-0f4468722d0f", 00:10:40.480 "is_configured": true, 00:10:40.480 "data_offset": 0, 00:10:40.480 "data_size": 65536 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "name": "BaseBdev2", 00:10:40.480 "uuid": "1cbd2fbf-3828-473d-ab86-7a5c47716fce", 00:10:40.480 "is_configured": true, 00:10:40.480 "data_offset": 0, 00:10:40.480 "data_size": 65536 00:10:40.480 }, 00:10:40.480 { 00:10:40.480 "name": "BaseBdev3", 00:10:40.480 "uuid": "2e6d7a44-336b-40ba-8376-58b329e67101", 00:10:40.480 "is_configured": true, 00:10:40.480 "data_offset": 0, 00:10:40.480 "data_size": 65536 00:10:40.480 } 00:10:40.480 ] 00:10:40.480 } 00:10:40.480 } 00:10:40.480 }' 00:10:40.480 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.480 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.480 BaseBdev2 00:10:40.480 BaseBdev3' 00:10:40.480 03:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.480 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
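The `verify_raid_bdev_properties` loop above joins `[.block_size, .md_size, .md_interleave, .dif_type]` with spaces for the raid volume and each base bdev, then compares the strings. For a plain 512-byte Malloc disk the metadata fields are empty, so the joined value is `512` followed by three spaces, which is why the xtrace shows the escaped pattern `\5\1\2\ \ \ `. A small sketch of that comparison (the `join_geometry` helper is hypothetical, standing in for the jq `join(" ")` step):

```shell
#!/usr/bin/env bash
# Sketch of the geometry comparison from bdev_raid.sh: the raid volume and
# every base bdev must agree on block_size/md_size/md_interleave/dif_type.
# join_geometry is a stand-in for the test's jq '[...] | join(" ")' filter.
join_geometry() {            # args: block_size md_size md_interleave dif_type
  printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}

cmp_raid_bdev=$(join_geometry 512 "" "" "")   # from the Raid Volume dump
cmp_base_bdev=$(join_geometry 512 "" "" "")   # from a BaseBdevN dump

# Empty metadata fields leave trailing spaces: the value is "512" + 3 spaces.
if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
  echo "geometry match: '${cmp_base_bdev}'"
fi
```

A mismatch in any of the four fields (e.g. a base bdev with a different block size) would make the `[[ ... == ... ]]` test fail and abort the run.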
00:10:40.749 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.750 [2024-11-05 03:21:54.169099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.750 "name": "Existed_Raid", 00:10:40.750 "uuid": "768d17e7-be06-4663-b8d9-8f7ea3f543ab", 00:10:40.750 "strip_size_kb": 0, 00:10:40.750 "state": "online", 00:10:40.750 "raid_level": "raid1", 00:10:40.750 "superblock": false, 00:10:40.750 "num_base_bdevs": 3, 00:10:40.750 "num_base_bdevs_discovered": 2, 00:10:40.750 "num_base_bdevs_operational": 2, 00:10:40.750 "base_bdevs_list": [ 00:10:40.750 { 00:10:40.750 "name": null, 00:10:40.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.750 "is_configured": false, 00:10:40.750 "data_offset": 0, 00:10:40.750 "data_size": 65536 00:10:40.750 }, 00:10:40.750 { 00:10:40.750 "name": "BaseBdev2", 00:10:40.750 "uuid": "1cbd2fbf-3828-473d-ab86-7a5c47716fce", 00:10:40.750 "is_configured": true, 00:10:40.750 "data_offset": 0, 00:10:40.750 "data_size": 65536 00:10:40.750 }, 00:10:40.750 { 00:10:40.750 "name": "BaseBdev3", 00:10:40.750 "uuid": "2e6d7a44-336b-40ba-8376-58b329e67101", 00:10:40.750 "is_configured": true, 
00:10:40.750 "data_offset": 0, 00:10:40.750 "data_size": 65536 00:10:40.750 } 00:10:40.750 ] 00:10:40.750 }' 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.750 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 [2024-11-05 03:21:54.815862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.317 03:21:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.317 03:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 [2024-11-05 03:21:54.959257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.577 [2024-11-05 03:21:54.959434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.577 [2024-11-05 03:21:55.037839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.577 [2024-11-05 03:21:55.038126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.577 [2024-11-05 03:21:55.038271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.577 
03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 BaseBdev2 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:41.577 03:21:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 [ 00:10:41.577 { 00:10:41.577 "name": "BaseBdev2", 00:10:41.577 "aliases": [ 00:10:41.577 "c29f5f79-5528-4c4a-8a05-20ef31da4dda" 00:10:41.577 ], 00:10:41.577 "product_name": "Malloc disk", 00:10:41.577 "block_size": 512, 00:10:41.577 "num_blocks": 65536, 00:10:41.577 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:41.577 "assigned_rate_limits": { 00:10:41.577 "rw_ios_per_sec": 0, 00:10:41.577 "rw_mbytes_per_sec": 0, 00:10:41.577 "r_mbytes_per_sec": 0, 00:10:41.577 "w_mbytes_per_sec": 0 00:10:41.577 }, 00:10:41.577 "claimed": false, 00:10:41.577 "zoned": false, 00:10:41.577 "supported_io_types": { 00:10:41.577 "read": true, 00:10:41.577 "write": true, 00:10:41.577 "unmap": true, 00:10:41.577 "flush": true, 00:10:41.577 "reset": true, 00:10:41.577 "nvme_admin": 
false, 00:10:41.577 "nvme_io": false, 00:10:41.577 "nvme_io_md": false, 00:10:41.577 "write_zeroes": true, 00:10:41.577 "zcopy": true, 00:10:41.577 "get_zone_info": false, 00:10:41.577 "zone_management": false, 00:10:41.577 "zone_append": false, 00:10:41.577 "compare": false, 00:10:41.577 "compare_and_write": false, 00:10:41.577 "abort": true, 00:10:41.577 "seek_hole": false, 00:10:41.577 "seek_data": false, 00:10:41.577 "copy": true, 00:10:41.577 "nvme_iov_md": false 00:10:41.577 }, 00:10:41.577 "memory_domains": [ 00:10:41.577 { 00:10:41.577 "dma_device_id": "system", 00:10:41.577 "dma_device_type": 1 00:10:41.577 }, 00:10:41.577 { 00:10:41.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.577 "dma_device_type": 2 00:10:41.577 } 00:10:41.577 ], 00:10:41.577 "driver_specific": {} 00:10:41.577 } 00:10:41.577 ] 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.577 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 BaseBdev3 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:41.837 03:21:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 [ 00:10:41.837 { 00:10:41.837 "name": "BaseBdev3", 00:10:41.837 "aliases": [ 00:10:41.837 "46c8d9f6-1207-4234-98df-f2f4aa809c0d" 00:10:41.837 ], 00:10:41.837 "product_name": "Malloc disk", 00:10:41.837 "block_size": 512, 00:10:41.837 "num_blocks": 65536, 00:10:41.837 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:41.837 "assigned_rate_limits": { 00:10:41.837 "rw_ios_per_sec": 0, 00:10:41.837 "rw_mbytes_per_sec": 0, 00:10:41.837 "r_mbytes_per_sec": 0, 00:10:41.837 "w_mbytes_per_sec": 0 00:10:41.837 }, 00:10:41.837 "claimed": false, 00:10:41.837 "zoned": false, 00:10:41.837 "supported_io_types": { 00:10:41.837 "read": true, 00:10:41.837 "write": true, 00:10:41.837 "unmap": true, 00:10:41.837 "flush": true, 00:10:41.837 "reset": true, 00:10:41.837 "nvme_admin": 
false, 00:10:41.837 "nvme_io": false, 00:10:41.837 "nvme_io_md": false, 00:10:41.837 "write_zeroes": true, 00:10:41.837 "zcopy": true, 00:10:41.837 "get_zone_info": false, 00:10:41.837 "zone_management": false, 00:10:41.837 "zone_append": false, 00:10:41.837 "compare": false, 00:10:41.837 "compare_and_write": false, 00:10:41.837 "abort": true, 00:10:41.837 "seek_hole": false, 00:10:41.837 "seek_data": false, 00:10:41.837 "copy": true, 00:10:41.837 "nvme_iov_md": false 00:10:41.837 }, 00:10:41.837 "memory_domains": [ 00:10:41.837 { 00:10:41.837 "dma_device_id": "system", 00:10:41.837 "dma_device_type": 1 00:10:41.837 }, 00:10:41.837 { 00:10:41.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.837 "dma_device_type": 2 00:10:41.837 } 00:10:41.837 ], 00:10:41.837 "driver_specific": {} 00:10:41.837 } 00:10:41.837 ] 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 [2024-11-05 03:21:55.256600] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.837 [2024-11-05 03:21:55.256659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.837 [2024-11-05 03:21:55.256685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:10:41.837 [2024-11-05 03:21:55.259154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.837 
03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.837 "name": "Existed_Raid", 00:10:41.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.837 "strip_size_kb": 0, 00:10:41.837 "state": "configuring", 00:10:41.837 "raid_level": "raid1", 00:10:41.837 "superblock": false, 00:10:41.837 "num_base_bdevs": 3, 00:10:41.837 "num_base_bdevs_discovered": 2, 00:10:41.837 "num_base_bdevs_operational": 3, 00:10:41.837 "base_bdevs_list": [ 00:10:41.837 { 00:10:41.837 "name": "BaseBdev1", 00:10:41.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.837 "is_configured": false, 00:10:41.837 "data_offset": 0, 00:10:41.837 "data_size": 0 00:10:41.837 }, 00:10:41.837 { 00:10:41.837 "name": "BaseBdev2", 00:10:41.837 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:41.837 "is_configured": true, 00:10:41.837 "data_offset": 0, 00:10:41.837 "data_size": 65536 00:10:41.837 }, 00:10:41.837 { 00:10:41.837 "name": "BaseBdev3", 00:10:41.837 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:41.837 "is_configured": true, 00:10:41.837 "data_offset": 0, 00:10:41.837 "data_size": 65536 00:10:41.837 } 00:10:41.837 ] 00:10:41.837 }' 00:10:41.838 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.838 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.405 [2024-11-05 03:21:55.792786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.405 03:21:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.405 "name": "Existed_Raid", 00:10:42.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.405 "strip_size_kb": 0, 00:10:42.405 "state": "configuring", 00:10:42.405 
"raid_level": "raid1", 00:10:42.405 "superblock": false, 00:10:42.405 "num_base_bdevs": 3, 00:10:42.405 "num_base_bdevs_discovered": 1, 00:10:42.405 "num_base_bdevs_operational": 3, 00:10:42.405 "base_bdevs_list": [ 00:10:42.405 { 00:10:42.405 "name": "BaseBdev1", 00:10:42.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.405 "is_configured": false, 00:10:42.405 "data_offset": 0, 00:10:42.405 "data_size": 0 00:10:42.405 }, 00:10:42.405 { 00:10:42.405 "name": null, 00:10:42.405 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:42.405 "is_configured": false, 00:10:42.405 "data_offset": 0, 00:10:42.405 "data_size": 65536 00:10:42.405 }, 00:10:42.405 { 00:10:42.405 "name": "BaseBdev3", 00:10:42.405 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:42.405 "is_configured": true, 00:10:42.405 "data_offset": 0, 00:10:42.405 "data_size": 65536 00:10:42.405 } 00:10:42.405 ] 00:10:42.405 }' 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.405 03:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.973 [2024-11-05 03:21:56.410584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.973 BaseBdev1 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.973 [ 00:10:42.973 { 00:10:42.973 "name": "BaseBdev1", 00:10:42.973 "aliases": [ 00:10:42.973 
"24f3a725-519d-41ee-844f-77782aa4b075" 00:10:42.973 ], 00:10:42.973 "product_name": "Malloc disk", 00:10:42.973 "block_size": 512, 00:10:42.973 "num_blocks": 65536, 00:10:42.973 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:42.973 "assigned_rate_limits": { 00:10:42.973 "rw_ios_per_sec": 0, 00:10:42.973 "rw_mbytes_per_sec": 0, 00:10:42.973 "r_mbytes_per_sec": 0, 00:10:42.973 "w_mbytes_per_sec": 0 00:10:42.973 }, 00:10:42.973 "claimed": true, 00:10:42.973 "claim_type": "exclusive_write", 00:10:42.973 "zoned": false, 00:10:42.973 "supported_io_types": { 00:10:42.973 "read": true, 00:10:42.973 "write": true, 00:10:42.973 "unmap": true, 00:10:42.973 "flush": true, 00:10:42.973 "reset": true, 00:10:42.973 "nvme_admin": false, 00:10:42.973 "nvme_io": false, 00:10:42.973 "nvme_io_md": false, 00:10:42.973 "write_zeroes": true, 00:10:42.973 "zcopy": true, 00:10:42.973 "get_zone_info": false, 00:10:42.973 "zone_management": false, 00:10:42.973 "zone_append": false, 00:10:42.973 "compare": false, 00:10:42.973 "compare_and_write": false, 00:10:42.973 "abort": true, 00:10:42.973 "seek_hole": false, 00:10:42.973 "seek_data": false, 00:10:42.973 "copy": true, 00:10:42.973 "nvme_iov_md": false 00:10:42.973 }, 00:10:42.973 "memory_domains": [ 00:10:42.973 { 00:10:42.973 "dma_device_id": "system", 00:10:42.973 "dma_device_type": 1 00:10:42.973 }, 00:10:42.973 { 00:10:42.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.973 "dma_device_type": 2 00:10:42.973 } 00:10:42.973 ], 00:10:42.973 "driver_specific": {} 00:10:42.973 } 00:10:42.973 ] 00:10:42.973 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.974 "name": "Existed_Raid", 00:10:42.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.974 "strip_size_kb": 0, 00:10:42.974 "state": "configuring", 00:10:42.974 "raid_level": "raid1", 00:10:42.974 "superblock": false, 00:10:42.974 "num_base_bdevs": 3, 00:10:42.974 "num_base_bdevs_discovered": 2, 00:10:42.974 "num_base_bdevs_operational": 3, 00:10:42.974 "base_bdevs_list": [ 
00:10:42.974 { 00:10:42.974 "name": "BaseBdev1", 00:10:42.974 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:42.974 "is_configured": true, 00:10:42.974 "data_offset": 0, 00:10:42.974 "data_size": 65536 00:10:42.974 }, 00:10:42.974 { 00:10:42.974 "name": null, 00:10:42.974 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:42.974 "is_configured": false, 00:10:42.974 "data_offset": 0, 00:10:42.974 "data_size": 65536 00:10:42.974 }, 00:10:42.974 { 00:10:42.974 "name": "BaseBdev3", 00:10:42.974 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:42.974 "is_configured": true, 00:10:42.974 "data_offset": 0, 00:10:42.974 "data_size": 65536 00:10:42.974 } 00:10:42.974 ] 00:10:42.974 }' 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.974 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.541 03:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.541 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 03:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 [2024-11-05 03:21:57.010797] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.541 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:43.541 "name": "Existed_Raid", 00:10:43.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.541 "strip_size_kb": 0, 00:10:43.541 "state": "configuring", 00:10:43.541 "raid_level": "raid1", 00:10:43.541 "superblock": false, 00:10:43.541 "num_base_bdevs": 3, 00:10:43.541 "num_base_bdevs_discovered": 1, 00:10:43.541 "num_base_bdevs_operational": 3, 00:10:43.541 "base_bdevs_list": [ 00:10:43.541 { 00:10:43.541 "name": "BaseBdev1", 00:10:43.542 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:43.542 "is_configured": true, 00:10:43.542 "data_offset": 0, 00:10:43.542 "data_size": 65536 00:10:43.542 }, 00:10:43.542 { 00:10:43.542 "name": null, 00:10:43.542 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:43.542 "is_configured": false, 00:10:43.542 "data_offset": 0, 00:10:43.542 "data_size": 65536 00:10:43.542 }, 00:10:43.542 { 00:10:43.542 "name": null, 00:10:43.542 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:43.542 "is_configured": false, 00:10:43.542 "data_offset": 0, 00:10:43.542 "data_size": 65536 00:10:43.542 } 00:10:43.542 ] 00:10:43.542 }' 00:10:43.542 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.542 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.109 [2024-11-05 03:21:57.555044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.109 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.110 03:21:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.110 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.110 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.110 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.110 "name": "Existed_Raid", 00:10:44.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.110 "strip_size_kb": 0, 00:10:44.110 "state": "configuring", 00:10:44.110 "raid_level": "raid1", 00:10:44.110 "superblock": false, 00:10:44.110 "num_base_bdevs": 3, 00:10:44.110 "num_base_bdevs_discovered": 2, 00:10:44.110 "num_base_bdevs_operational": 3, 00:10:44.110 "base_bdevs_list": [ 00:10:44.110 { 00:10:44.110 "name": "BaseBdev1", 00:10:44.110 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:44.110 "is_configured": true, 00:10:44.110 "data_offset": 0, 00:10:44.110 "data_size": 65536 00:10:44.110 }, 00:10:44.110 { 00:10:44.110 "name": null, 00:10:44.110 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:44.110 "is_configured": false, 00:10:44.110 "data_offset": 0, 00:10:44.110 "data_size": 65536 00:10:44.110 }, 00:10:44.110 { 00:10:44.110 "name": "BaseBdev3", 00:10:44.110 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:44.110 "is_configured": true, 00:10:44.110 "data_offset": 0, 00:10:44.110 "data_size": 65536 00:10:44.110 } 00:10:44.110 ] 00:10:44.110 }' 00:10:44.110 03:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.110 03:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.677 03:21:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 [2024-11-05 03:21:58.131177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.677 "name": "Existed_Raid", 00:10:44.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.677 "strip_size_kb": 0, 00:10:44.677 "state": "configuring", 00:10:44.677 "raid_level": "raid1", 00:10:44.677 "superblock": false, 00:10:44.677 "num_base_bdevs": 3, 00:10:44.677 "num_base_bdevs_discovered": 1, 00:10:44.677 "num_base_bdevs_operational": 3, 00:10:44.677 "base_bdevs_list": [ 00:10:44.677 { 00:10:44.677 "name": null, 00:10:44.677 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:44.677 "is_configured": false, 00:10:44.677 "data_offset": 0, 00:10:44.677 "data_size": 65536 00:10:44.677 }, 00:10:44.677 { 00:10:44.677 "name": null, 00:10:44.677 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:44.677 "is_configured": false, 00:10:44.677 "data_offset": 0, 00:10:44.677 "data_size": 65536 00:10:44.677 }, 00:10:44.677 { 00:10:44.677 "name": "BaseBdev3", 00:10:44.677 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:44.677 "is_configured": true, 00:10:44.677 "data_offset": 0, 00:10:44.677 "data_size": 65536 00:10:44.677 } 00:10:44.677 ] 00:10:44.677 }' 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:44.677 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.279 [2024-11-05 03:21:58.797408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.279 
03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.279 "name": "Existed_Raid", 00:10:45.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.279 "strip_size_kb": 0, 00:10:45.279 "state": "configuring", 00:10:45.279 "raid_level": "raid1", 00:10:45.279 "superblock": false, 00:10:45.279 "num_base_bdevs": 3, 00:10:45.279 "num_base_bdevs_discovered": 2, 00:10:45.279 "num_base_bdevs_operational": 3, 00:10:45.279 "base_bdevs_list": [ 00:10:45.279 { 00:10:45.279 "name": null, 00:10:45.279 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:45.279 "is_configured": false, 00:10:45.279 "data_offset": 0, 00:10:45.279 "data_size": 65536 00:10:45.279 }, 00:10:45.279 { 00:10:45.279 "name": "BaseBdev2", 00:10:45.279 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:45.279 "is_configured": true, 00:10:45.279 
"data_offset": 0, 00:10:45.279 "data_size": 65536 00:10:45.279 }, 00:10:45.279 { 00:10:45.279 "name": "BaseBdev3", 00:10:45.279 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:45.279 "is_configured": true, 00:10:45.279 "data_offset": 0, 00:10:45.279 "data_size": 65536 00:10:45.279 } 00:10:45.279 ] 00:10:45.279 }' 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.279 03:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 24f3a725-519d-41ee-844f-77782aa4b075 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.883 [2024-11-05 03:21:59.471800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.883 [2024-11-05 03:21:59.471877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.883 [2024-11-05 03:21:59.471888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:45.883 [2024-11-05 03:21:59.472194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:45.883 [2024-11-05 03:21:59.472410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.883 [2024-11-05 03:21:59.472444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.883 [2024-11-05 03:21:59.472722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.883 NewBaseBdev 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:45.883 
03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.883 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.884 [ 00:10:45.884 { 00:10:45.884 "name": "NewBaseBdev", 00:10:45.884 "aliases": [ 00:10:45.884 "24f3a725-519d-41ee-844f-77782aa4b075" 00:10:45.884 ], 00:10:45.884 "product_name": "Malloc disk", 00:10:45.884 "block_size": 512, 00:10:45.884 "num_blocks": 65536, 00:10:45.884 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:45.884 "assigned_rate_limits": { 00:10:45.884 "rw_ios_per_sec": 0, 00:10:45.884 "rw_mbytes_per_sec": 0, 00:10:45.884 "r_mbytes_per_sec": 0, 00:10:45.884 "w_mbytes_per_sec": 0 00:10:45.884 }, 00:10:45.884 "claimed": true, 00:10:45.884 "claim_type": "exclusive_write", 00:10:45.884 "zoned": false, 00:10:45.884 "supported_io_types": { 00:10:45.884 "read": true, 00:10:45.884 "write": true, 00:10:45.884 "unmap": true, 00:10:45.884 "flush": true, 00:10:45.884 "reset": true, 00:10:45.884 "nvme_admin": false, 00:10:45.884 "nvme_io": false, 00:10:45.884 "nvme_io_md": false, 00:10:45.884 "write_zeroes": true, 00:10:45.884 "zcopy": true, 00:10:45.884 "get_zone_info": false, 00:10:45.884 "zone_management": false, 00:10:45.884 "zone_append": false, 00:10:45.884 "compare": false, 00:10:45.884 "compare_and_write": false, 00:10:45.884 "abort": true, 00:10:45.884 "seek_hole": false, 00:10:45.884 "seek_data": false, 00:10:45.884 "copy": true, 00:10:45.884 "nvme_iov_md": false 00:10:45.884 }, 00:10:45.884 
"memory_domains": [ 00:10:45.884 { 00:10:45.884 "dma_device_id": "system", 00:10:45.884 "dma_device_type": 1 00:10:45.884 }, 00:10:45.884 { 00:10:45.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.884 "dma_device_type": 2 00:10:45.884 } 00:10:45.884 ], 00:10:45.884 "driver_specific": {} 00:10:45.884 } 00:10:45.884 ] 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.884 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.143 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.143 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.143 "name": "Existed_Raid", 00:10:46.143 "uuid": "e62c5b5a-ecb6-45a4-af19-ffb8100f3e6b", 00:10:46.143 "strip_size_kb": 0, 00:10:46.143 "state": "online", 00:10:46.143 "raid_level": "raid1", 00:10:46.143 "superblock": false, 00:10:46.143 "num_base_bdevs": 3, 00:10:46.143 "num_base_bdevs_discovered": 3, 00:10:46.143 "num_base_bdevs_operational": 3, 00:10:46.143 "base_bdevs_list": [ 00:10:46.143 { 00:10:46.143 "name": "NewBaseBdev", 00:10:46.143 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:46.143 "is_configured": true, 00:10:46.143 "data_offset": 0, 00:10:46.143 "data_size": 65536 00:10:46.143 }, 00:10:46.143 { 00:10:46.143 "name": "BaseBdev2", 00:10:46.143 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:46.143 "is_configured": true, 00:10:46.143 "data_offset": 0, 00:10:46.143 "data_size": 65536 00:10:46.143 }, 00:10:46.143 { 00:10:46.143 "name": "BaseBdev3", 00:10:46.143 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:46.143 "is_configured": true, 00:10:46.143 "data_offset": 0, 00:10:46.143 "data_size": 65536 00:10:46.143 } 00:10:46.143 ] 00:10:46.143 }' 00:10:46.143 03:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.143 03:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.402 [2024-11-05 03:22:00.016281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.402 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.661 "name": "Existed_Raid", 00:10:46.661 "aliases": [ 00:10:46.661 "e62c5b5a-ecb6-45a4-af19-ffb8100f3e6b" 00:10:46.661 ], 00:10:46.661 "product_name": "Raid Volume", 00:10:46.661 "block_size": 512, 00:10:46.661 "num_blocks": 65536, 00:10:46.661 "uuid": "e62c5b5a-ecb6-45a4-af19-ffb8100f3e6b", 00:10:46.661 "assigned_rate_limits": { 00:10:46.661 "rw_ios_per_sec": 0, 00:10:46.661 "rw_mbytes_per_sec": 0, 00:10:46.661 "r_mbytes_per_sec": 0, 00:10:46.661 "w_mbytes_per_sec": 0 00:10:46.661 }, 00:10:46.661 "claimed": false, 00:10:46.661 "zoned": false, 00:10:46.661 "supported_io_types": { 00:10:46.661 "read": true, 00:10:46.661 "write": true, 00:10:46.661 "unmap": false, 00:10:46.661 "flush": false, 00:10:46.661 "reset": true, 00:10:46.661 "nvme_admin": false, 00:10:46.661 "nvme_io": false, 00:10:46.661 "nvme_io_md": false, 00:10:46.661 "write_zeroes": true, 
00:10:46.661 "zcopy": false, 00:10:46.661 "get_zone_info": false, 00:10:46.661 "zone_management": false, 00:10:46.661 "zone_append": false, 00:10:46.661 "compare": false, 00:10:46.661 "compare_and_write": false, 00:10:46.661 "abort": false, 00:10:46.661 "seek_hole": false, 00:10:46.661 "seek_data": false, 00:10:46.661 "copy": false, 00:10:46.661 "nvme_iov_md": false 00:10:46.661 }, 00:10:46.661 "memory_domains": [ 00:10:46.661 { 00:10:46.661 "dma_device_id": "system", 00:10:46.661 "dma_device_type": 1 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.661 "dma_device_type": 2 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 "dma_device_id": "system", 00:10:46.661 "dma_device_type": 1 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.661 "dma_device_type": 2 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 "dma_device_id": "system", 00:10:46.661 "dma_device_type": 1 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.661 "dma_device_type": 2 00:10:46.661 } 00:10:46.661 ], 00:10:46.661 "driver_specific": { 00:10:46.661 "raid": { 00:10:46.661 "uuid": "e62c5b5a-ecb6-45a4-af19-ffb8100f3e6b", 00:10:46.661 "strip_size_kb": 0, 00:10:46.661 "state": "online", 00:10:46.661 "raid_level": "raid1", 00:10:46.661 "superblock": false, 00:10:46.661 "num_base_bdevs": 3, 00:10:46.661 "num_base_bdevs_discovered": 3, 00:10:46.661 "num_base_bdevs_operational": 3, 00:10:46.661 "base_bdevs_list": [ 00:10:46.661 { 00:10:46.661 "name": "NewBaseBdev", 00:10:46.661 "uuid": "24f3a725-519d-41ee-844f-77782aa4b075", 00:10:46.661 "is_configured": true, 00:10:46.661 "data_offset": 0, 00:10:46.661 "data_size": 65536 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 "name": "BaseBdev2", 00:10:46.661 "uuid": "c29f5f79-5528-4c4a-8a05-20ef31da4dda", 00:10:46.661 "is_configured": true, 00:10:46.661 "data_offset": 0, 00:10:46.661 "data_size": 65536 00:10:46.661 }, 00:10:46.661 { 00:10:46.661 
"name": "BaseBdev3", 00:10:46.661 "uuid": "46c8d9f6-1207-4234-98df-f2f4aa809c0d", 00:10:46.661 "is_configured": true, 00:10:46.661 "data_offset": 0, 00:10:46.661 "data_size": 65536 00:10:46.661 } 00:10:46.661 ] 00:10:46.661 } 00:10:46.662 } 00:10:46.662 }' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.662 BaseBdev2 00:10:46.662 BaseBdev3' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:46.921 [2024-11-05 03:22:00.328018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.921 [2024-11-05 03:22:00.328067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.921 [2024-11-05 03:22:00.328135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.921 [2024-11-05 03:22:00.328535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.921 [2024-11-05 03:22:00.328561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67215 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67215 ']' 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67215 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67215 00:10:46.921 killing process with pid 67215 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67215' 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 67215 00:10:46.921 [2024-11-05 03:22:00.365277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.921 03:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67215 00:10:47.180 [2024-11-05 03:22:00.585139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.116 ************************************ 00:10:48.116 END TEST raid_state_function_test 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.116 00:10:48.116 real 0m11.457s 00:10:48.116 user 0m19.139s 00:10:48.116 sys 0m1.578s 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.116 ************************************ 00:10:48.116 03:22:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:48.116 03:22:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:48.116 03:22:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.116 03:22:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.116 ************************************ 00:10:48.116 START TEST raid_state_function_test_sb 00:10:48.116 ************************************ 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67842 00:10:48.116 Process raid pid: 67842 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67842' 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67842 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 67842 ']' 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.116 03:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.116 [2024-11-05 03:22:01.663755] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:10:48.116 [2024-11-05 03:22:01.663992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.375 [2024-11-05 03:22:01.844648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.375 [2024-11-05 03:22:01.966539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.635 [2024-11-05 03:22:02.150258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.635 [2024-11-05 03:22:02.150326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.202 [2024-11-05 03:22:02.589962] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.202 [2024-11-05 03:22:02.590270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.202 [2024-11-05 03:22:02.590292] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.202 [2024-11-05 03:22:02.590416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.202 [2024-11-05 03:22:02.590435] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:49.202 [2024-11-05 03:22:02.590517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.202 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.203 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.203 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.203 03:22:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.203 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.203 "name": "Existed_Raid", 00:10:49.203 "uuid": "19f8f23b-2be3-431d-afa5-927df35eb31b", 00:10:49.203 "strip_size_kb": 0, 00:10:49.203 "state": "configuring", 00:10:49.203 "raid_level": "raid1", 00:10:49.203 "superblock": true, 00:10:49.203 "num_base_bdevs": 3, 00:10:49.203 "num_base_bdevs_discovered": 0, 00:10:49.203 "num_base_bdevs_operational": 3, 00:10:49.203 "base_bdevs_list": [ 00:10:49.203 { 00:10:49.203 "name": "BaseBdev1", 00:10:49.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.203 "is_configured": false, 00:10:49.203 "data_offset": 0, 00:10:49.203 "data_size": 0 00:10:49.203 }, 00:10:49.203 { 00:10:49.203 "name": "BaseBdev2", 00:10:49.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.203 "is_configured": false, 00:10:49.203 "data_offset": 0, 00:10:49.203 "data_size": 0 00:10:49.203 }, 00:10:49.203 { 00:10:49.203 "name": "BaseBdev3", 00:10:49.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.203 "is_configured": false, 00:10:49.203 "data_offset": 0, 00:10:49.203 "data_size": 0 00:10:49.203 } 00:10:49.203 ] 00:10:49.203 }' 00:10:49.203 03:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.203 03:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.462 [2024-11-05 03:22:03.062079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.462 [2024-11-05 03:22:03.062124] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.462 [2024-11-05 03:22:03.070043] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.462 [2024-11-05 03:22:03.070112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.462 [2024-11-05 03:22:03.070126] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.462 [2024-11-05 03:22:03.070141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.462 [2024-11-05 03:22:03.070150] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.462 [2024-11-05 03:22:03.070163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.462 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 [2024-11-05 03:22:03.112296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.731 BaseBdev1 
00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.731 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.731 [ 00:10:49.731 { 00:10:49.731 "name": "BaseBdev1", 00:10:49.731 "aliases": [ 00:10:49.731 "772dc9d9-043e-4ed0-b464-bd725f02e69a" 00:10:49.731 ], 00:10:49.731 "product_name": "Malloc disk", 00:10:49.731 "block_size": 512, 00:10:49.731 "num_blocks": 65536, 00:10:49.731 "uuid": "772dc9d9-043e-4ed0-b464-bd725f02e69a", 00:10:49.731 "assigned_rate_limits": { 00:10:49.731 
"rw_ios_per_sec": 0, 00:10:49.731 "rw_mbytes_per_sec": 0, 00:10:49.731 "r_mbytes_per_sec": 0, 00:10:49.731 "w_mbytes_per_sec": 0 00:10:49.731 }, 00:10:49.731 "claimed": true, 00:10:49.731 "claim_type": "exclusive_write", 00:10:49.731 "zoned": false, 00:10:49.731 "supported_io_types": { 00:10:49.731 "read": true, 00:10:49.731 "write": true, 00:10:49.731 "unmap": true, 00:10:49.731 "flush": true, 00:10:49.731 "reset": true, 00:10:49.731 "nvme_admin": false, 00:10:49.731 "nvme_io": false, 00:10:49.731 "nvme_io_md": false, 00:10:49.731 "write_zeroes": true, 00:10:49.731 "zcopy": true, 00:10:49.731 "get_zone_info": false, 00:10:49.731 "zone_management": false, 00:10:49.731 "zone_append": false, 00:10:49.732 "compare": false, 00:10:49.732 "compare_and_write": false, 00:10:49.732 "abort": true, 00:10:49.732 "seek_hole": false, 00:10:49.732 "seek_data": false, 00:10:49.732 "copy": true, 00:10:49.732 "nvme_iov_md": false 00:10:49.732 }, 00:10:49.732 "memory_domains": [ 00:10:49.732 { 00:10:49.732 "dma_device_id": "system", 00:10:49.732 "dma_device_type": 1 00:10:49.732 }, 00:10:49.732 { 00:10:49.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.732 "dma_device_type": 2 00:10:49.732 } 00:10:49.732 ], 00:10:49.732 "driver_specific": {} 00:10:49.732 } 00:10:49.732 ] 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.732 "name": "Existed_Raid", 00:10:49.732 "uuid": "fe86d969-9af5-40f2-9e06-9a170458e356", 00:10:49.732 "strip_size_kb": 0, 00:10:49.732 "state": "configuring", 00:10:49.732 "raid_level": "raid1", 00:10:49.732 "superblock": true, 00:10:49.732 "num_base_bdevs": 3, 00:10:49.732 "num_base_bdevs_discovered": 1, 00:10:49.732 "num_base_bdevs_operational": 3, 00:10:49.732 "base_bdevs_list": [ 00:10:49.732 { 00:10:49.732 "name": "BaseBdev1", 00:10:49.732 "uuid": "772dc9d9-043e-4ed0-b464-bd725f02e69a", 00:10:49.732 "is_configured": true, 00:10:49.732 "data_offset": 2048, 00:10:49.732 "data_size": 63488 
00:10:49.732 }, 00:10:49.732 { 00:10:49.732 "name": "BaseBdev2", 00:10:49.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.732 "is_configured": false, 00:10:49.732 "data_offset": 0, 00:10:49.732 "data_size": 0 00:10:49.732 }, 00:10:49.732 { 00:10:49.732 "name": "BaseBdev3", 00:10:49.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.732 "is_configured": false, 00:10:49.732 "data_offset": 0, 00:10:49.732 "data_size": 0 00:10:49.732 } 00:10:49.732 ] 00:10:49.732 }' 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.732 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 [2024-11-05 03:22:03.656527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.300 [2024-11-05 03:22:03.656607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 [2024-11-05 03:22:03.664595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.300 [2024-11-05 03:22:03.667024] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.300 [2024-11-05 03:22:03.667095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.300 [2024-11-05 03:22:03.667111] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.300 [2024-11-05 03:22:03.667125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.300 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.300 "name": "Existed_Raid", 00:10:50.300 "uuid": "374eccf0-91ea-4f70-8b0a-1f68e4347f5f", 00:10:50.300 "strip_size_kb": 0, 00:10:50.300 "state": "configuring", 00:10:50.300 "raid_level": "raid1", 00:10:50.300 "superblock": true, 00:10:50.300 "num_base_bdevs": 3, 00:10:50.300 "num_base_bdevs_discovered": 1, 00:10:50.300 "num_base_bdevs_operational": 3, 00:10:50.300 "base_bdevs_list": [ 00:10:50.300 { 00:10:50.300 "name": "BaseBdev1", 00:10:50.300 "uuid": "772dc9d9-043e-4ed0-b464-bd725f02e69a", 00:10:50.300 "is_configured": true, 00:10:50.301 "data_offset": 2048, 00:10:50.301 "data_size": 63488 00:10:50.301 }, 00:10:50.301 { 00:10:50.301 "name": "BaseBdev2", 00:10:50.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.301 "is_configured": false, 00:10:50.301 "data_offset": 0, 00:10:50.301 "data_size": 0 00:10:50.301 }, 00:10:50.301 { 00:10:50.301 "name": "BaseBdev3", 00:10:50.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.301 "is_configured": false, 00:10:50.301 "data_offset": 0, 00:10:50.301 "data_size": 0 00:10:50.301 } 00:10:50.301 ] 00:10:50.301 }' 00:10:50.301 03:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.301 03:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:50.559 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.559 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.559 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.819 [2024-11-05 03:22:04.217008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.819 BaseBdev2 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.819 [ 00:10:50.819 { 00:10:50.819 "name": "BaseBdev2", 00:10:50.819 "aliases": [ 00:10:50.819 "8721c73f-8ad9-48b4-8b3e-55f7de12ae9a" 00:10:50.819 ], 00:10:50.819 "product_name": "Malloc disk", 00:10:50.819 "block_size": 512, 00:10:50.819 "num_blocks": 65536, 00:10:50.819 "uuid": "8721c73f-8ad9-48b4-8b3e-55f7de12ae9a", 00:10:50.819 "assigned_rate_limits": { 00:10:50.819 "rw_ios_per_sec": 0, 00:10:50.819 "rw_mbytes_per_sec": 0, 00:10:50.819 "r_mbytes_per_sec": 0, 00:10:50.819 "w_mbytes_per_sec": 0 00:10:50.819 }, 00:10:50.819 "claimed": true, 00:10:50.819 "claim_type": "exclusive_write", 00:10:50.819 "zoned": false, 00:10:50.819 "supported_io_types": { 00:10:50.819 "read": true, 00:10:50.819 "write": true, 00:10:50.819 "unmap": true, 00:10:50.819 "flush": true, 00:10:50.819 "reset": true, 00:10:50.819 "nvme_admin": false, 00:10:50.819 "nvme_io": false, 00:10:50.819 "nvme_io_md": false, 00:10:50.819 "write_zeroes": true, 00:10:50.819 "zcopy": true, 00:10:50.819 "get_zone_info": false, 00:10:50.819 "zone_management": false, 00:10:50.819 "zone_append": false, 00:10:50.819 "compare": false, 00:10:50.819 "compare_and_write": false, 00:10:50.819 "abort": true, 00:10:50.819 "seek_hole": false, 00:10:50.819 "seek_data": false, 00:10:50.819 "copy": true, 00:10:50.819 "nvme_iov_md": false 00:10:50.819 }, 00:10:50.819 "memory_domains": [ 00:10:50.819 { 00:10:50.819 "dma_device_id": "system", 00:10:50.819 "dma_device_type": 1 00:10:50.819 }, 00:10:50.819 { 00:10:50.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.819 "dma_device_type": 2 00:10:50.819 } 00:10:50.819 ], 00:10:50.819 "driver_specific": {} 00:10:50.819 } 00:10:50.819 ] 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.819 
03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.819 "name": "Existed_Raid", 00:10:50.819 "uuid": "374eccf0-91ea-4f70-8b0a-1f68e4347f5f", 00:10:50.819 "strip_size_kb": 0, 00:10:50.819 "state": "configuring", 00:10:50.819 "raid_level": "raid1", 00:10:50.819 "superblock": true, 00:10:50.819 "num_base_bdevs": 3, 00:10:50.819 "num_base_bdevs_discovered": 2, 00:10:50.819 "num_base_bdevs_operational": 3, 00:10:50.819 "base_bdevs_list": [ 00:10:50.819 { 00:10:50.819 "name": "BaseBdev1", 00:10:50.819 "uuid": "772dc9d9-043e-4ed0-b464-bd725f02e69a", 00:10:50.819 "is_configured": true, 00:10:50.819 "data_offset": 2048, 00:10:50.819 "data_size": 63488 00:10:50.819 }, 00:10:50.819 { 00:10:50.819 "name": "BaseBdev2", 00:10:50.819 "uuid": "8721c73f-8ad9-48b4-8b3e-55f7de12ae9a", 00:10:50.819 "is_configured": true, 00:10:50.819 "data_offset": 2048, 00:10:50.819 "data_size": 63488 00:10:50.819 }, 00:10:50.819 { 00:10:50.819 "name": "BaseBdev3", 00:10:50.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.819 "is_configured": false, 00:10:50.819 "data_offset": 0, 00:10:50.819 "data_size": 0 00:10:50.819 } 00:10:50.819 ] 00:10:50.819 }' 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.819 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.387 [2024-11-05 03:22:04.807825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.387 [2024-11-05 03:22:04.808141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:51.387 [2024-11-05 03:22:04.808168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.387 BaseBdev3 00:10:51.387 [2024-11-05 03:22:04.808590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:51.387 [2024-11-05 03:22:04.808812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.387 [2024-11-05 03:22:04.808828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:51.387 [2024-11-05 03:22:04.809028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.387 03:22:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.387 [ 00:10:51.387 { 00:10:51.387 "name": "BaseBdev3", 00:10:51.387 "aliases": [ 00:10:51.387 "1ee4caf8-000c-4bee-a181-d591d4ebca3e" 00:10:51.387 ], 00:10:51.387 "product_name": "Malloc disk", 00:10:51.387 "block_size": 512, 00:10:51.387 "num_blocks": 65536, 00:10:51.387 "uuid": "1ee4caf8-000c-4bee-a181-d591d4ebca3e", 00:10:51.387 "assigned_rate_limits": { 00:10:51.387 "rw_ios_per_sec": 0, 00:10:51.387 "rw_mbytes_per_sec": 0, 00:10:51.387 "r_mbytes_per_sec": 0, 00:10:51.387 "w_mbytes_per_sec": 0 00:10:51.387 }, 00:10:51.387 "claimed": true, 00:10:51.387 "claim_type": "exclusive_write", 00:10:51.387 "zoned": false, 00:10:51.387 "supported_io_types": { 00:10:51.387 "read": true, 00:10:51.387 "write": true, 00:10:51.387 "unmap": true, 00:10:51.387 "flush": true, 00:10:51.387 "reset": true, 00:10:51.387 "nvme_admin": false, 00:10:51.387 "nvme_io": false, 00:10:51.387 "nvme_io_md": false, 00:10:51.387 "write_zeroes": true, 00:10:51.387 "zcopy": true, 00:10:51.387 "get_zone_info": false, 00:10:51.387 "zone_management": false, 00:10:51.387 "zone_append": false, 00:10:51.387 "compare": false, 00:10:51.387 "compare_and_write": false, 00:10:51.387 "abort": true, 00:10:51.387 "seek_hole": false, 00:10:51.387 "seek_data": false, 00:10:51.387 "copy": true, 00:10:51.387 "nvme_iov_md": false 00:10:51.387 }, 00:10:51.387 "memory_domains": [ 00:10:51.387 { 00:10:51.387 "dma_device_id": "system", 00:10:51.387 "dma_device_type": 1 00:10:51.387 }, 00:10:51.387 { 00:10:51.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.387 "dma_device_type": 2 00:10:51.387 } 00:10:51.387 ], 00:10:51.387 "driver_specific": {} 00:10:51.387 } 00:10:51.387 ] 
00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.387 
03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.387 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.387 "name": "Existed_Raid", 00:10:51.387 "uuid": "374eccf0-91ea-4f70-8b0a-1f68e4347f5f", 00:10:51.387 "strip_size_kb": 0, 00:10:51.387 "state": "online", 00:10:51.387 "raid_level": "raid1", 00:10:51.387 "superblock": true, 00:10:51.387 "num_base_bdevs": 3, 00:10:51.387 "num_base_bdevs_discovered": 3, 00:10:51.387 "num_base_bdevs_operational": 3, 00:10:51.387 "base_bdevs_list": [ 00:10:51.387 { 00:10:51.387 "name": "BaseBdev1", 00:10:51.387 "uuid": "772dc9d9-043e-4ed0-b464-bd725f02e69a", 00:10:51.387 "is_configured": true, 00:10:51.387 "data_offset": 2048, 00:10:51.387 "data_size": 63488 00:10:51.387 }, 00:10:51.388 { 00:10:51.388 "name": "BaseBdev2", 00:10:51.388 "uuid": "8721c73f-8ad9-48b4-8b3e-55f7de12ae9a", 00:10:51.388 "is_configured": true, 00:10:51.388 "data_offset": 2048, 00:10:51.388 "data_size": 63488 00:10:51.388 }, 00:10:51.388 { 00:10:51.388 "name": "BaseBdev3", 00:10:51.388 "uuid": "1ee4caf8-000c-4bee-a181-d591d4ebca3e", 00:10:51.388 "is_configured": true, 00:10:51.388 "data_offset": 2048, 00:10:51.388 "data_size": 63488 00:10:51.388 } 00:10:51.388 ] 00:10:51.388 }' 00:10:51.388 03:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.388 03:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.955 [2024-11-05 03:22:05.380438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.955 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.955 "name": "Existed_Raid", 00:10:51.955 "aliases": [ 00:10:51.955 "374eccf0-91ea-4f70-8b0a-1f68e4347f5f" 00:10:51.955 ], 00:10:51.955 "product_name": "Raid Volume", 00:10:51.955 "block_size": 512, 00:10:51.955 "num_blocks": 63488, 00:10:51.955 "uuid": "374eccf0-91ea-4f70-8b0a-1f68e4347f5f", 00:10:51.955 "assigned_rate_limits": { 00:10:51.955 "rw_ios_per_sec": 0, 00:10:51.955 "rw_mbytes_per_sec": 0, 00:10:51.955 "r_mbytes_per_sec": 0, 00:10:51.955 "w_mbytes_per_sec": 0 00:10:51.955 }, 00:10:51.955 "claimed": false, 00:10:51.955 "zoned": false, 00:10:51.955 "supported_io_types": { 00:10:51.955 "read": true, 00:10:51.955 "write": true, 00:10:51.955 "unmap": false, 00:10:51.955 "flush": false, 00:10:51.955 "reset": true, 00:10:51.955 "nvme_admin": false, 00:10:51.955 "nvme_io": false, 00:10:51.955 "nvme_io_md": false, 00:10:51.955 "write_zeroes": true, 
00:10:51.955 "zcopy": false, 00:10:51.955 "get_zone_info": false, 00:10:51.955 "zone_management": false, 00:10:51.955 "zone_append": false, 00:10:51.955 "compare": false, 00:10:51.955 "compare_and_write": false, 00:10:51.955 "abort": false, 00:10:51.955 "seek_hole": false, 00:10:51.955 "seek_data": false, 00:10:51.955 "copy": false, 00:10:51.955 "nvme_iov_md": false 00:10:51.955 }, 00:10:51.955 "memory_domains": [ 00:10:51.955 { 00:10:51.955 "dma_device_id": "system", 00:10:51.955 "dma_device_type": 1 00:10:51.955 }, 00:10:51.955 { 00:10:51.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.955 "dma_device_type": 2 00:10:51.955 }, 00:10:51.955 { 00:10:51.955 "dma_device_id": "system", 00:10:51.955 "dma_device_type": 1 00:10:51.956 }, 00:10:51.956 { 00:10:51.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.956 "dma_device_type": 2 00:10:51.956 }, 00:10:51.956 { 00:10:51.956 "dma_device_id": "system", 00:10:51.956 "dma_device_type": 1 00:10:51.956 }, 00:10:51.956 { 00:10:51.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.956 "dma_device_type": 2 00:10:51.956 } 00:10:51.956 ], 00:10:51.956 "driver_specific": { 00:10:51.956 "raid": { 00:10:51.956 "uuid": "374eccf0-91ea-4f70-8b0a-1f68e4347f5f", 00:10:51.956 "strip_size_kb": 0, 00:10:51.956 "state": "online", 00:10:51.956 "raid_level": "raid1", 00:10:51.956 "superblock": true, 00:10:51.956 "num_base_bdevs": 3, 00:10:51.956 "num_base_bdevs_discovered": 3, 00:10:51.956 "num_base_bdevs_operational": 3, 00:10:51.956 "base_bdevs_list": [ 00:10:51.956 { 00:10:51.956 "name": "BaseBdev1", 00:10:51.956 "uuid": "772dc9d9-043e-4ed0-b464-bd725f02e69a", 00:10:51.956 "is_configured": true, 00:10:51.956 "data_offset": 2048, 00:10:51.956 "data_size": 63488 00:10:51.956 }, 00:10:51.956 { 00:10:51.956 "name": "BaseBdev2", 00:10:51.956 "uuid": "8721c73f-8ad9-48b4-8b3e-55f7de12ae9a", 00:10:51.956 "is_configured": true, 00:10:51.956 "data_offset": 2048, 00:10:51.956 "data_size": 63488 00:10:51.956 }, 00:10:51.956 { 
00:10:51.956 "name": "BaseBdev3", 00:10:51.956 "uuid": "1ee4caf8-000c-4bee-a181-d591d4ebca3e", 00:10:51.956 "is_configured": true, 00:10:51.956 "data_offset": 2048, 00:10:51.956 "data_size": 63488 00:10:51.956 } 00:10:51.956 ] 00:10:51.956 } 00:10:51.956 } 00:10:51.956 }' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:51.956 BaseBdev2 00:10:51.956 BaseBdev3' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.956 03:22:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.956 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.216 [2024-11-05 03:22:05.708158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.216 
03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.216 "name": "Existed_Raid", 00:10:52.216 "uuid": "374eccf0-91ea-4f70-8b0a-1f68e4347f5f", 00:10:52.216 "strip_size_kb": 0, 00:10:52.216 "state": "online", 00:10:52.216 "raid_level": "raid1", 00:10:52.216 "superblock": true, 00:10:52.216 "num_base_bdevs": 3, 00:10:52.216 "num_base_bdevs_discovered": 2, 00:10:52.216 "num_base_bdevs_operational": 2, 00:10:52.216 "base_bdevs_list": [ 00:10:52.216 { 00:10:52.216 "name": null, 00:10:52.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.216 "is_configured": false, 00:10:52.216 "data_offset": 0, 00:10:52.216 "data_size": 63488 00:10:52.216 }, 00:10:52.216 { 00:10:52.216 "name": "BaseBdev2", 00:10:52.216 "uuid": "8721c73f-8ad9-48b4-8b3e-55f7de12ae9a", 00:10:52.216 "is_configured": true, 00:10:52.216 "data_offset": 2048, 00:10:52.216 "data_size": 63488 00:10:52.216 }, 00:10:52.216 { 00:10:52.216 "name": "BaseBdev3", 00:10:52.216 "uuid": "1ee4caf8-000c-4bee-a181-d591d4ebca3e", 00:10:52.216 "is_configured": true, 00:10:52.216 "data_offset": 2048, 00:10:52.216 "data_size": 63488 00:10:52.216 } 00:10:52.216 ] 00:10:52.216 }' 00:10:52.216 03:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.216 
03:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.784 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.784 [2024-11-05 03:22:06.373129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 [2024-11-05 03:22:06.506561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.060 [2024-11-05 03:22:06.506715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.060 [2024-11-05 03:22:06.582525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.060 [2024-11-05 03:22:06.582588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.060 [2024-11-05 03:22:06.582606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 BaseBdev2 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 [ 00:10:53.365 { 00:10:53.365 "name": "BaseBdev2", 00:10:53.365 "aliases": [ 00:10:53.365 "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a" 00:10:53.365 ], 00:10:53.365 "product_name": "Malloc disk", 00:10:53.365 "block_size": 512, 00:10:53.365 "num_blocks": 65536, 00:10:53.365 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:53.365 "assigned_rate_limits": { 00:10:53.365 "rw_ios_per_sec": 0, 00:10:53.365 "rw_mbytes_per_sec": 0, 00:10:53.365 "r_mbytes_per_sec": 0, 00:10:53.365 "w_mbytes_per_sec": 0 00:10:53.365 }, 00:10:53.365 "claimed": false, 00:10:53.365 "zoned": false, 00:10:53.365 "supported_io_types": { 00:10:53.365 "read": true, 00:10:53.365 "write": true, 00:10:53.365 "unmap": true, 00:10:53.365 "flush": true, 00:10:53.365 "reset": true, 00:10:53.365 "nvme_admin": false, 00:10:53.365 "nvme_io": false, 00:10:53.365 
"nvme_io_md": false, 00:10:53.365 "write_zeroes": true, 00:10:53.365 "zcopy": true, 00:10:53.365 "get_zone_info": false, 00:10:53.365 "zone_management": false, 00:10:53.365 "zone_append": false, 00:10:53.365 "compare": false, 00:10:53.365 "compare_and_write": false, 00:10:53.365 "abort": true, 00:10:53.365 "seek_hole": false, 00:10:53.365 "seek_data": false, 00:10:53.365 "copy": true, 00:10:53.365 "nvme_iov_md": false 00:10:53.365 }, 00:10:53.365 "memory_domains": [ 00:10:53.365 { 00:10:53.365 "dma_device_id": "system", 00:10:53.365 "dma_device_type": 1 00:10:53.365 }, 00:10:53.365 { 00:10:53.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.365 "dma_device_type": 2 00:10:53.365 } 00:10:53.365 ], 00:10:53.365 "driver_specific": {} 00:10:53.365 } 00:10:53.365 ] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 BaseBdev3 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 [ 00:10:53.365 { 00:10:53.365 "name": "BaseBdev3", 00:10:53.365 "aliases": [ 00:10:53.365 "b3a4b3c7-1710-4805-ac11-38b85b1aaea2" 00:10:53.365 ], 00:10:53.365 "product_name": "Malloc disk", 00:10:53.365 "block_size": 512, 00:10:53.365 "num_blocks": 65536, 00:10:53.365 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:53.365 "assigned_rate_limits": { 00:10:53.365 "rw_ios_per_sec": 0, 00:10:53.365 "rw_mbytes_per_sec": 0, 00:10:53.365 "r_mbytes_per_sec": 0, 00:10:53.365 "w_mbytes_per_sec": 0 00:10:53.365 }, 00:10:53.365 "claimed": false, 00:10:53.365 "zoned": false, 00:10:53.365 "supported_io_types": { 00:10:53.365 "read": true, 00:10:53.365 "write": true, 00:10:53.365 "unmap": true, 00:10:53.365 "flush": true, 00:10:53.365 "reset": true, 00:10:53.365 "nvme_admin": false, 
00:10:53.365 "nvme_io": false, 00:10:53.365 "nvme_io_md": false, 00:10:53.365 "write_zeroes": true, 00:10:53.365 "zcopy": true, 00:10:53.365 "get_zone_info": false, 00:10:53.365 "zone_management": false, 00:10:53.365 "zone_append": false, 00:10:53.365 "compare": false, 00:10:53.365 "compare_and_write": false, 00:10:53.365 "abort": true, 00:10:53.365 "seek_hole": false, 00:10:53.365 "seek_data": false, 00:10:53.365 "copy": true, 00:10:53.365 "nvme_iov_md": false 00:10:53.365 }, 00:10:53.365 "memory_domains": [ 00:10:53.365 { 00:10:53.365 "dma_device_id": "system", 00:10:53.365 "dma_device_type": 1 00:10:53.365 }, 00:10:53.365 { 00:10:53.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.365 "dma_device_type": 2 00:10:53.365 } 00:10:53.365 ], 00:10:53.365 "driver_specific": {} 00:10:53.365 } 00:10:53.365 ] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 [2024-11-05 03:22:06.788138] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.365 [2024-11-05 03:22:06.788212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.365 [2024-11-05 03:22:06.788236] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.365 [2024-11-05 03:22:06.790824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.365 
03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.365 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.365 "name": "Existed_Raid", 00:10:53.366 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:53.366 "strip_size_kb": 0, 00:10:53.366 "state": "configuring", 00:10:53.366 "raid_level": "raid1", 00:10:53.366 "superblock": true, 00:10:53.366 "num_base_bdevs": 3, 00:10:53.366 "num_base_bdevs_discovered": 2, 00:10:53.366 "num_base_bdevs_operational": 3, 00:10:53.366 "base_bdevs_list": [ 00:10:53.366 { 00:10:53.366 "name": "BaseBdev1", 00:10:53.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.366 "is_configured": false, 00:10:53.366 "data_offset": 0, 00:10:53.366 "data_size": 0 00:10:53.366 }, 00:10:53.366 { 00:10:53.366 "name": "BaseBdev2", 00:10:53.366 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:53.366 "is_configured": true, 00:10:53.366 "data_offset": 2048, 00:10:53.366 "data_size": 63488 00:10:53.366 }, 00:10:53.366 { 00:10:53.366 "name": "BaseBdev3", 00:10:53.366 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:53.366 "is_configured": true, 00:10:53.366 "data_offset": 2048, 00:10:53.366 "data_size": 63488 00:10:53.366 } 00:10:53.366 ] 00:10:53.366 }' 00:10:53.366 03:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.366 03:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.933 [2024-11-05 03:22:07.328277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.933 03:22:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.933 "name": 
"Existed_Raid", 00:10:53.933 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:53.933 "strip_size_kb": 0, 00:10:53.933 "state": "configuring", 00:10:53.933 "raid_level": "raid1", 00:10:53.933 "superblock": true, 00:10:53.933 "num_base_bdevs": 3, 00:10:53.933 "num_base_bdevs_discovered": 1, 00:10:53.933 "num_base_bdevs_operational": 3, 00:10:53.933 "base_bdevs_list": [ 00:10:53.933 { 00:10:53.933 "name": "BaseBdev1", 00:10:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.933 "is_configured": false, 00:10:53.933 "data_offset": 0, 00:10:53.933 "data_size": 0 00:10:53.933 }, 00:10:53.933 { 00:10:53.933 "name": null, 00:10:53.933 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:53.933 "is_configured": false, 00:10:53.933 "data_offset": 0, 00:10:53.933 "data_size": 63488 00:10:53.933 }, 00:10:53.933 { 00:10:53.933 "name": "BaseBdev3", 00:10:53.933 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:53.933 "is_configured": true, 00:10:53.933 "data_offset": 2048, 00:10:53.933 "data_size": 63488 00:10:53.933 } 00:10:53.933 ] 00:10:53.933 }' 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.933 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:54.500 
03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.500 [2024-11-05 03:22:07.939568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.500 BaseBdev1 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.500 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 [ 00:10:54.501 { 00:10:54.501 "name": "BaseBdev1", 00:10:54.501 "aliases": [ 00:10:54.501 "75185015-a757-4a0c-9f24-be04286a5fd9" 00:10:54.501 ], 00:10:54.501 "product_name": "Malloc disk", 00:10:54.501 "block_size": 512, 00:10:54.501 "num_blocks": 65536, 00:10:54.501 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:54.501 "assigned_rate_limits": { 00:10:54.501 "rw_ios_per_sec": 0, 00:10:54.501 "rw_mbytes_per_sec": 0, 00:10:54.501 "r_mbytes_per_sec": 0, 00:10:54.501 "w_mbytes_per_sec": 0 00:10:54.501 }, 00:10:54.501 "claimed": true, 00:10:54.501 "claim_type": "exclusive_write", 00:10:54.501 "zoned": false, 00:10:54.501 "supported_io_types": { 00:10:54.501 "read": true, 00:10:54.501 "write": true, 00:10:54.501 "unmap": true, 00:10:54.501 "flush": true, 00:10:54.501 "reset": true, 00:10:54.501 "nvme_admin": false, 00:10:54.501 "nvme_io": false, 00:10:54.501 "nvme_io_md": false, 00:10:54.501 "write_zeroes": true, 00:10:54.501 "zcopy": true, 00:10:54.501 "get_zone_info": false, 00:10:54.501 "zone_management": false, 00:10:54.501 "zone_append": false, 00:10:54.501 "compare": false, 00:10:54.501 "compare_and_write": false, 00:10:54.501 "abort": true, 00:10:54.501 "seek_hole": false, 00:10:54.501 "seek_data": false, 00:10:54.501 "copy": true, 00:10:54.501 "nvme_iov_md": false 00:10:54.501 }, 00:10:54.501 "memory_domains": [ 00:10:54.501 { 00:10:54.501 "dma_device_id": "system", 00:10:54.501 "dma_device_type": 1 00:10:54.501 }, 00:10:54.501 { 00:10:54.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.501 "dma_device_type": 2 00:10:54.501 } 00:10:54.501 ], 00:10:54.501 "driver_specific": {} 00:10:54.501 } 00:10:54.501 ] 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.501 
03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 03:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.501 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.501 "name": "Existed_Raid", 00:10:54.501 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:54.501 "strip_size_kb": 0, 
00:10:54.501 "state": "configuring", 00:10:54.501 "raid_level": "raid1", 00:10:54.501 "superblock": true, 00:10:54.501 "num_base_bdevs": 3, 00:10:54.501 "num_base_bdevs_discovered": 2, 00:10:54.501 "num_base_bdevs_operational": 3, 00:10:54.501 "base_bdevs_list": [ 00:10:54.501 { 00:10:54.501 "name": "BaseBdev1", 00:10:54.501 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:54.501 "is_configured": true, 00:10:54.501 "data_offset": 2048, 00:10:54.501 "data_size": 63488 00:10:54.501 }, 00:10:54.501 { 00:10:54.501 "name": null, 00:10:54.501 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:54.501 "is_configured": false, 00:10:54.501 "data_offset": 0, 00:10:54.501 "data_size": 63488 00:10:54.501 }, 00:10:54.501 { 00:10:54.501 "name": "BaseBdev3", 00:10:54.501 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:54.501 "is_configured": true, 00:10:54.501 "data_offset": 2048, 00:10:54.501 "data_size": 63488 00:10:54.501 } 00:10:54.501 ] 00:10:54.501 }' 00:10:54.501 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.501 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.069 [2024-11-05 03:22:08.567788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.069 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.070 "name": "Existed_Raid", 00:10:55.070 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:55.070 "strip_size_kb": 0, 00:10:55.070 "state": "configuring", 00:10:55.070 "raid_level": "raid1", 00:10:55.070 "superblock": true, 00:10:55.070 "num_base_bdevs": 3, 00:10:55.070 "num_base_bdevs_discovered": 1, 00:10:55.070 "num_base_bdevs_operational": 3, 00:10:55.070 "base_bdevs_list": [ 00:10:55.070 { 00:10:55.070 "name": "BaseBdev1", 00:10:55.070 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:55.070 "is_configured": true, 00:10:55.070 "data_offset": 2048, 00:10:55.070 "data_size": 63488 00:10:55.070 }, 00:10:55.070 { 00:10:55.070 "name": null, 00:10:55.070 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:55.070 "is_configured": false, 00:10:55.070 "data_offset": 0, 00:10:55.070 "data_size": 63488 00:10:55.070 }, 00:10:55.070 { 00:10:55.070 "name": null, 00:10:55.070 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:55.070 "is_configured": false, 00:10:55.070 "data_offset": 0, 00:10:55.070 "data_size": 63488 00:10:55.070 } 00:10:55.070 ] 00:10:55.070 }' 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.070 03:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.638 [2024-11-05 03:22:09.168038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.638 "name": "Existed_Raid", 00:10:55.638 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:55.638 "strip_size_kb": 0, 00:10:55.638 "state": "configuring", 00:10:55.638 "raid_level": "raid1", 00:10:55.638 "superblock": true, 00:10:55.638 "num_base_bdevs": 3, 00:10:55.638 "num_base_bdevs_discovered": 2, 00:10:55.638 "num_base_bdevs_operational": 3, 00:10:55.638 "base_bdevs_list": [ 00:10:55.638 { 00:10:55.638 "name": "BaseBdev1", 00:10:55.638 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:55.638 "is_configured": true, 00:10:55.638 "data_offset": 2048, 00:10:55.638 "data_size": 63488 00:10:55.638 }, 00:10:55.638 { 00:10:55.638 "name": null, 00:10:55.638 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:55.638 "is_configured": false, 00:10:55.638 "data_offset": 0, 00:10:55.638 "data_size": 63488 00:10:55.638 }, 00:10:55.638 { 00:10:55.638 "name": "BaseBdev3", 00:10:55.638 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:55.638 "is_configured": true, 00:10:55.638 "data_offset": 2048, 00:10:55.638 "data_size": 63488 00:10:55.638 } 00:10:55.638 ] 00:10:55.638 }' 00:10:55.638 03:22:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.638 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.205 [2024-11-05 03:22:09.740227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.205 03:22:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.205 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.463 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.463 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.463 "name": "Existed_Raid", 00:10:56.463 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:56.463 "strip_size_kb": 0, 00:10:56.463 "state": "configuring", 00:10:56.463 "raid_level": "raid1", 00:10:56.463 "superblock": true, 00:10:56.463 "num_base_bdevs": 3, 00:10:56.463 "num_base_bdevs_discovered": 1, 00:10:56.463 "num_base_bdevs_operational": 3, 00:10:56.463 "base_bdevs_list": [ 00:10:56.463 { 00:10:56.463 "name": null, 00:10:56.463 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:56.463 "is_configured": false, 00:10:56.463 "data_offset": 0, 00:10:56.463 "data_size": 63488 00:10:56.463 }, 00:10:56.463 { 00:10:56.463 
"name": null, 00:10:56.463 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:56.463 "is_configured": false, 00:10:56.463 "data_offset": 0, 00:10:56.463 "data_size": 63488 00:10:56.463 }, 00:10:56.463 { 00:10:56.463 "name": "BaseBdev3", 00:10:56.463 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:56.463 "is_configured": true, 00:10:56.463 "data_offset": 2048, 00:10:56.463 "data_size": 63488 00:10:56.463 } 00:10:56.463 ] 00:10:56.463 }' 00:10:56.463 03:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.463 03:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.722 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.722 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.722 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.722 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.980 [2024-11-05 03:22:10.404090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.980 03:22:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.980 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.981 "name": "Existed_Raid", 00:10:56.981 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:56.981 "strip_size_kb": 0, 
00:10:56.981 "state": "configuring", 00:10:56.981 "raid_level": "raid1", 00:10:56.981 "superblock": true, 00:10:56.981 "num_base_bdevs": 3, 00:10:56.981 "num_base_bdevs_discovered": 2, 00:10:56.981 "num_base_bdevs_operational": 3, 00:10:56.981 "base_bdevs_list": [ 00:10:56.981 { 00:10:56.981 "name": null, 00:10:56.981 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:56.981 "is_configured": false, 00:10:56.981 "data_offset": 0, 00:10:56.981 "data_size": 63488 00:10:56.981 }, 00:10:56.981 { 00:10:56.981 "name": "BaseBdev2", 00:10:56.981 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:56.981 "is_configured": true, 00:10:56.981 "data_offset": 2048, 00:10:56.981 "data_size": 63488 00:10:56.981 }, 00:10:56.981 { 00:10:56.981 "name": "BaseBdev3", 00:10:56.981 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:56.981 "is_configured": true, 00:10:56.981 "data_offset": 2048, 00:10:56.981 "data_size": 63488 00:10:56.981 } 00:10:56.981 ] 00:10:56.981 }' 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.981 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 03:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 75185015-a757-4a0c-9f24-be04286a5fd9 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 [2024-11-05 03:22:11.086898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:57.549 [2024-11-05 03:22:11.087144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:57.549 [2024-11-05 03:22:11.087161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:57.549 [2024-11-05 03:22:11.087529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:57.549 [2024-11-05 03:22:11.087743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:57.549 [2024-11-05 03:22:11.087780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:57.549 NewBaseBdev 00:10:57.549 [2024-11-05 03:22:11.087938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev 
NewBaseBdev 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 [ 00:10:57.549 { 00:10:57.549 "name": "NewBaseBdev", 00:10:57.549 "aliases": [ 00:10:57.549 "75185015-a757-4a0c-9f24-be04286a5fd9" 00:10:57.549 ], 00:10:57.549 "product_name": "Malloc disk", 00:10:57.549 "block_size": 512, 00:10:57.549 "num_blocks": 65536, 00:10:57.549 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:57.549 "assigned_rate_limits": { 00:10:57.549 "rw_ios_per_sec": 0, 00:10:57.549 "rw_mbytes_per_sec": 0, 00:10:57.549 "r_mbytes_per_sec": 0, 00:10:57.549 "w_mbytes_per_sec": 0 00:10:57.549 }, 00:10:57.549 "claimed": true, 00:10:57.549 "claim_type": 
"exclusive_write", 00:10:57.549 "zoned": false, 00:10:57.549 "supported_io_types": { 00:10:57.549 "read": true, 00:10:57.549 "write": true, 00:10:57.549 "unmap": true, 00:10:57.549 "flush": true, 00:10:57.549 "reset": true, 00:10:57.549 "nvme_admin": false, 00:10:57.549 "nvme_io": false, 00:10:57.549 "nvme_io_md": false, 00:10:57.549 "write_zeroes": true, 00:10:57.549 "zcopy": true, 00:10:57.549 "get_zone_info": false, 00:10:57.549 "zone_management": false, 00:10:57.549 "zone_append": false, 00:10:57.549 "compare": false, 00:10:57.549 "compare_and_write": false, 00:10:57.549 "abort": true, 00:10:57.549 "seek_hole": false, 00:10:57.549 "seek_data": false, 00:10:57.549 "copy": true, 00:10:57.549 "nvme_iov_md": false 00:10:57.549 }, 00:10:57.549 "memory_domains": [ 00:10:57.549 { 00:10:57.549 "dma_device_id": "system", 00:10:57.549 "dma_device_type": 1 00:10:57.549 }, 00:10:57.549 { 00:10:57.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.549 "dma_device_type": 2 00:10:57.549 } 00:10:57.549 ], 00:10:57.549 "driver_specific": {} 00:10:57.549 } 00:10:57.549 ] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.549 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.549 "name": "Existed_Raid", 00:10:57.549 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:57.549 "strip_size_kb": 0, 00:10:57.549 "state": "online", 00:10:57.549 "raid_level": "raid1", 00:10:57.549 "superblock": true, 00:10:57.549 "num_base_bdevs": 3, 00:10:57.549 "num_base_bdevs_discovered": 3, 00:10:57.549 "num_base_bdevs_operational": 3, 00:10:57.549 "base_bdevs_list": [ 00:10:57.549 { 00:10:57.549 "name": "NewBaseBdev", 00:10:57.549 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:57.549 "is_configured": true, 00:10:57.549 "data_offset": 2048, 00:10:57.549 "data_size": 63488 00:10:57.549 }, 00:10:57.549 { 00:10:57.550 "name": "BaseBdev2", 00:10:57.550 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:57.550 "is_configured": true, 00:10:57.550 "data_offset": 2048, 00:10:57.550 "data_size": 63488 
00:10:57.550 }, 00:10:57.550 { 00:10:57.550 "name": "BaseBdev3", 00:10:57.550 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:57.550 "is_configured": true, 00:10:57.550 "data_offset": 2048, 00:10:57.550 "data_size": 63488 00:10:57.550 } 00:10:57.550 ] 00:10:57.550 }' 00:10:57.550 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.550 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.118 [2024-11-05 03:22:11.655517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.118 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.118 "name": 
"Existed_Raid", 00:10:58.118 "aliases": [ 00:10:58.118 "d1453d13-80ad-4be8-be10-efee11bfebd2" 00:10:58.118 ], 00:10:58.118 "product_name": "Raid Volume", 00:10:58.118 "block_size": 512, 00:10:58.118 "num_blocks": 63488, 00:10:58.118 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:58.118 "assigned_rate_limits": { 00:10:58.118 "rw_ios_per_sec": 0, 00:10:58.118 "rw_mbytes_per_sec": 0, 00:10:58.118 "r_mbytes_per_sec": 0, 00:10:58.118 "w_mbytes_per_sec": 0 00:10:58.118 }, 00:10:58.118 "claimed": false, 00:10:58.118 "zoned": false, 00:10:58.118 "supported_io_types": { 00:10:58.118 "read": true, 00:10:58.118 "write": true, 00:10:58.118 "unmap": false, 00:10:58.118 "flush": false, 00:10:58.118 "reset": true, 00:10:58.118 "nvme_admin": false, 00:10:58.118 "nvme_io": false, 00:10:58.118 "nvme_io_md": false, 00:10:58.118 "write_zeroes": true, 00:10:58.118 "zcopy": false, 00:10:58.118 "get_zone_info": false, 00:10:58.118 "zone_management": false, 00:10:58.118 "zone_append": false, 00:10:58.118 "compare": false, 00:10:58.118 "compare_and_write": false, 00:10:58.118 "abort": false, 00:10:58.118 "seek_hole": false, 00:10:58.118 "seek_data": false, 00:10:58.118 "copy": false, 00:10:58.118 "nvme_iov_md": false 00:10:58.118 }, 00:10:58.118 "memory_domains": [ 00:10:58.118 { 00:10:58.118 "dma_device_id": "system", 00:10:58.118 "dma_device_type": 1 00:10:58.118 }, 00:10:58.118 { 00:10:58.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.118 "dma_device_type": 2 00:10:58.118 }, 00:10:58.118 { 00:10:58.118 "dma_device_id": "system", 00:10:58.118 "dma_device_type": 1 00:10:58.118 }, 00:10:58.118 { 00:10:58.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.118 "dma_device_type": 2 00:10:58.118 }, 00:10:58.118 { 00:10:58.118 "dma_device_id": "system", 00:10:58.118 "dma_device_type": 1 00:10:58.118 }, 00:10:58.118 { 00:10:58.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.118 "dma_device_type": 2 00:10:58.118 } 00:10:58.118 ], 00:10:58.118 "driver_specific": { 
00:10:58.118 "raid": { 00:10:58.118 "uuid": "d1453d13-80ad-4be8-be10-efee11bfebd2", 00:10:58.118 "strip_size_kb": 0, 00:10:58.118 "state": "online", 00:10:58.118 "raid_level": "raid1", 00:10:58.118 "superblock": true, 00:10:58.118 "num_base_bdevs": 3, 00:10:58.118 "num_base_bdevs_discovered": 3, 00:10:58.118 "num_base_bdevs_operational": 3, 00:10:58.118 "base_bdevs_list": [ 00:10:58.118 { 00:10:58.118 "name": "NewBaseBdev", 00:10:58.118 "uuid": "75185015-a757-4a0c-9f24-be04286a5fd9", 00:10:58.118 "is_configured": true, 00:10:58.118 "data_offset": 2048, 00:10:58.118 "data_size": 63488 00:10:58.119 }, 00:10:58.119 { 00:10:58.119 "name": "BaseBdev2", 00:10:58.119 "uuid": "1f6ca4b4-534f-46ca-bea5-dea7ad5d444a", 00:10:58.119 "is_configured": true, 00:10:58.119 "data_offset": 2048, 00:10:58.119 "data_size": 63488 00:10:58.119 }, 00:10:58.119 { 00:10:58.119 "name": "BaseBdev3", 00:10:58.119 "uuid": "b3a4b3c7-1710-4805-ac11-38b85b1aaea2", 00:10:58.119 "is_configured": true, 00:10:58.119 "data_offset": 2048, 00:10:58.119 "data_size": 63488 00:10:58.119 } 00:10:58.119 ] 00:10:58.119 } 00:10:58.119 } 00:10:58.119 }' 00:10:58.119 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.119 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:58.119 BaseBdev2 00:10:58.119 BaseBdev3' 00:10:58.119 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.378 [2024-11-05 03:22:11.967176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.378 [2024-11-05 03:22:11.967231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.378 [2024-11-05 03:22:11.967301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.378 [2024-11-05 03:22:11.967755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.378 [2024-11-05 03:22:11.967772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67842 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 67842 ']' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 67842 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.378 03:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67842 00:10:58.378 03:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:58.378 killing process with pid 67842 00:10:58.378 03:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:58.378 03:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67842' 00:10:58.378 03:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 67842 00:10:58.378 [2024-11-05 03:22:12.004847] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.378 03:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 67842 00:10:58.637 [2024-11-05 03:22:12.257184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.574 03:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.574 00:10:59.574 real 0m11.646s 00:10:59.574 user 0m19.534s 00:10:59.574 sys 0m1.550s 00:10:59.574 03:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.574 ************************************ 00:10:59.574 END TEST raid_state_function_test_sb 00:10:59.574 ************************************ 00:10:59.574 03:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.833 03:22:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:59.833 03:22:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:59.833 03:22:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.833 03:22:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.833 ************************************ 00:10:59.833 START TEST raid_superblock_test 00:10:59.833 ************************************ 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:59.833 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68481 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68481 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68481 ']' 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.834 03:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 [2024-11-05 03:22:13.346326] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:10:59.834 [2024-11-05 03:22:13.346533] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68481 ] 00:11:00.094 [2024-11-05 03:22:13.517301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.094 [2024-11-05 03:22:13.639239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.353 [2024-11-05 03:22:13.834190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.353 [2024-11-05 03:22:13.834253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:01.008 
03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.008 malloc1 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.008 [2024-11-05 03:22:14.389241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.008 [2024-11-05 03:22:14.389373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.008 [2024-11-05 03:22:14.389436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.008 [2024-11-05 03:22:14.389459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.008 [2024-11-05 03:22:14.392172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.008 [2024-11-05 03:22:14.392215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.008 pt1 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:01.008 03:22:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.009 malloc2 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.009 [2024-11-05 03:22:14.437940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.009 [2024-11-05 03:22:14.438018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.009 [2024-11-05 03:22:14.438061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:01.009 [2024-11-05 03:22:14.438074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.009 [2024-11-05 03:22:14.441083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.009 [2024-11-05 03:22:14.441124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.009 
pt2 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.009 malloc3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.009 [2024-11-05 03:22:14.496948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:01.009 [2024-11-05 03:22:14.497028] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.009 [2024-11-05 03:22:14.497059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:01.009 [2024-11-05 03:22:14.497074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.009 [2024-11-05 03:22:14.499917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.009 [2024-11-05 03:22:14.499960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:01.009 pt3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.009 [2024-11-05 03:22:14.505010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.009 [2024-11-05 03:22:14.507614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.009 [2024-11-05 03:22:14.507731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.009 [2024-11-05 03:22:14.507957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:01.009 [2024-11-05 03:22:14.507983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.009 [2024-11-05 03:22:14.508275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:01.009 
[2024-11-05 03:22:14.508703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:01.009 [2024-11-05 03:22:14.508851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:01.009 [2024-11-05 03:22:14.509257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.009 "name": "raid_bdev1", 00:11:01.009 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:01.009 "strip_size_kb": 0, 00:11:01.009 "state": "online", 00:11:01.009 "raid_level": "raid1", 00:11:01.009 "superblock": true, 00:11:01.009 "num_base_bdevs": 3, 00:11:01.009 "num_base_bdevs_discovered": 3, 00:11:01.009 "num_base_bdevs_operational": 3, 00:11:01.009 "base_bdevs_list": [ 00:11:01.009 { 00:11:01.009 "name": "pt1", 00:11:01.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.009 "is_configured": true, 00:11:01.009 "data_offset": 2048, 00:11:01.009 "data_size": 63488 00:11:01.009 }, 00:11:01.009 { 00:11:01.009 "name": "pt2", 00:11:01.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.009 "is_configured": true, 00:11:01.009 "data_offset": 2048, 00:11:01.009 "data_size": 63488 00:11:01.009 }, 00:11:01.009 { 00:11:01.009 "name": "pt3", 00:11:01.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.009 "is_configured": true, 00:11:01.009 "data_offset": 2048, 00:11:01.009 "data_size": 63488 00:11:01.009 } 00:11:01.009 ] 00:11:01.009 }' 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.009 03:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.577 03:22:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.577 [2024-11-05 03:22:15.025757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.577 "name": "raid_bdev1", 00:11:01.577 "aliases": [ 00:11:01.577 "fbb58564-0672-406a-8226-904fa0395978" 00:11:01.577 ], 00:11:01.577 "product_name": "Raid Volume", 00:11:01.577 "block_size": 512, 00:11:01.577 "num_blocks": 63488, 00:11:01.577 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:01.577 "assigned_rate_limits": { 00:11:01.577 "rw_ios_per_sec": 0, 00:11:01.577 "rw_mbytes_per_sec": 0, 00:11:01.577 "r_mbytes_per_sec": 0, 00:11:01.577 "w_mbytes_per_sec": 0 00:11:01.577 }, 00:11:01.577 "claimed": false, 00:11:01.577 "zoned": false, 00:11:01.577 "supported_io_types": { 00:11:01.577 "read": true, 00:11:01.577 "write": true, 00:11:01.577 "unmap": false, 00:11:01.577 "flush": false, 00:11:01.577 "reset": true, 00:11:01.577 "nvme_admin": false, 00:11:01.577 "nvme_io": false, 00:11:01.577 "nvme_io_md": false, 00:11:01.577 "write_zeroes": true, 00:11:01.577 "zcopy": false, 00:11:01.577 "get_zone_info": false, 00:11:01.577 "zone_management": false, 00:11:01.577 "zone_append": false, 00:11:01.577 "compare": false, 00:11:01.577 
"compare_and_write": false, 00:11:01.577 "abort": false, 00:11:01.577 "seek_hole": false, 00:11:01.577 "seek_data": false, 00:11:01.577 "copy": false, 00:11:01.577 "nvme_iov_md": false 00:11:01.577 }, 00:11:01.577 "memory_domains": [ 00:11:01.577 { 00:11:01.577 "dma_device_id": "system", 00:11:01.577 "dma_device_type": 1 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.577 "dma_device_type": 2 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "dma_device_id": "system", 00:11:01.577 "dma_device_type": 1 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.577 "dma_device_type": 2 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "dma_device_id": "system", 00:11:01.577 "dma_device_type": 1 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.577 "dma_device_type": 2 00:11:01.577 } 00:11:01.577 ], 00:11:01.577 "driver_specific": { 00:11:01.577 "raid": { 00:11:01.577 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:01.577 "strip_size_kb": 0, 00:11:01.577 "state": "online", 00:11:01.577 "raid_level": "raid1", 00:11:01.577 "superblock": true, 00:11:01.577 "num_base_bdevs": 3, 00:11:01.577 "num_base_bdevs_discovered": 3, 00:11:01.577 "num_base_bdevs_operational": 3, 00:11:01.577 "base_bdevs_list": [ 00:11:01.577 { 00:11:01.577 "name": "pt1", 00:11:01.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.577 "is_configured": true, 00:11:01.577 "data_offset": 2048, 00:11:01.577 "data_size": 63488 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "name": "pt2", 00:11:01.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.577 "is_configured": true, 00:11:01.577 "data_offset": 2048, 00:11:01.577 "data_size": 63488 00:11:01.577 }, 00:11:01.577 { 00:11:01.577 "name": "pt3", 00:11:01.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.577 "is_configured": true, 00:11:01.577 "data_offset": 2048, 00:11:01.577 "data_size": 63488 00:11:01.577 } 
00:11:01.577 ] 00:11:01.577 } 00:11:01.577 } 00:11:01.577 }' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.577 pt2 00:11:01.577 pt3' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.577 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:01.836 [2024-11-05 03:22:15.349798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fbb58564-0672-406a-8226-904fa0395978 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fbb58564-0672-406a-8226-904fa0395978 ']' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 [2024-11-05 03:22:15.397452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.836 [2024-11-05 03:22:15.397488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.836 [2024-11-05 03:22:15.397584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.836 [2024-11-05 03:22:15.397723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.836 [2024-11-05 03:22:15.397741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.837 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:02.096 03:22:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 [2024-11-05 03:22:15.545559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:02.096 [2024-11-05 03:22:15.548060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:02.096 [2024-11-05 03:22:15.548128] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:02.096 [2024-11-05 03:22:15.548196] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:02.096 [2024-11-05 03:22:15.548314] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:02.096 [2024-11-05 03:22:15.548396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:02.096 [2024-11-05 03:22:15.548427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.096 [2024-11-05 03:22:15.548442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:02.096 request: 00:11:02.096 { 00:11:02.096 "name": "raid_bdev1", 00:11:02.096 "raid_level": "raid1", 00:11:02.096 "base_bdevs": [ 00:11:02.096 "malloc1", 00:11:02.096 "malloc2", 00:11:02.096 "malloc3" 00:11:02.096 ], 00:11:02.096 "superblock": false, 00:11:02.096 "method": "bdev_raid_create", 00:11:02.096 "req_id": 1 00:11:02.096 } 00:11:02.096 Got JSON-RPC error response 00:11:02.096 response: 00:11:02.096 { 00:11:02.096 "code": -17, 00:11:02.096 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:02.096 } 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 [2024-11-05 03:22:15.617481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.096 [2024-11-05 03:22:15.617741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.096 [2024-11-05 03:22:15.617821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:02.096 [2024-11-05 03:22:15.618060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.096 [2024-11-05 03:22:15.620951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.096 [2024-11-05 03:22:15.621133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.096 [2024-11-05 03:22:15.621248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:02.096 [2024-11-05 03:22:15.621358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:02.096 pt1 00:11:02.096 
03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.096 "name": "raid_bdev1", 00:11:02.096 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:02.096 "strip_size_kb": 0, 00:11:02.096 
"state": "configuring", 00:11:02.096 "raid_level": "raid1", 00:11:02.096 "superblock": true, 00:11:02.096 "num_base_bdevs": 3, 00:11:02.096 "num_base_bdevs_discovered": 1, 00:11:02.096 "num_base_bdevs_operational": 3, 00:11:02.096 "base_bdevs_list": [ 00:11:02.096 { 00:11:02.096 "name": "pt1", 00:11:02.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.096 "is_configured": true, 00:11:02.096 "data_offset": 2048, 00:11:02.096 "data_size": 63488 00:11:02.096 }, 00:11:02.096 { 00:11:02.096 "name": null, 00:11:02.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.096 "is_configured": false, 00:11:02.096 "data_offset": 2048, 00:11:02.096 "data_size": 63488 00:11:02.096 }, 00:11:02.096 { 00:11:02.096 "name": null, 00:11:02.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.096 "is_configured": false, 00:11:02.096 "data_offset": 2048, 00:11:02.096 "data_size": 63488 00:11:02.096 } 00:11:02.096 ] 00:11:02.096 }' 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.096 03:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 [2024-11-05 03:22:16.161793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.666 [2024-11-05 03:22:16.161868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.666 [2024-11-05 03:22:16.161903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:02.666 
[2024-11-05 03:22:16.161919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.666 [2024-11-05 03:22:16.162502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.666 [2024-11-05 03:22:16.162542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.666 [2024-11-05 03:22:16.162657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:02.666 [2024-11-05 03:22:16.162690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.666 pt2 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 [2024-11-05 03:22:16.169771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.666 "name": "raid_bdev1", 00:11:02.666 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:02.666 "strip_size_kb": 0, 00:11:02.666 "state": "configuring", 00:11:02.666 "raid_level": "raid1", 00:11:02.666 "superblock": true, 00:11:02.666 "num_base_bdevs": 3, 00:11:02.666 "num_base_bdevs_discovered": 1, 00:11:02.666 "num_base_bdevs_operational": 3, 00:11:02.666 "base_bdevs_list": [ 00:11:02.666 { 00:11:02.666 "name": "pt1", 00:11:02.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.666 "is_configured": true, 00:11:02.666 "data_offset": 2048, 00:11:02.666 "data_size": 63488 00:11:02.666 }, 00:11:02.666 { 00:11:02.666 "name": null, 00:11:02.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.666 "is_configured": false, 00:11:02.666 "data_offset": 0, 00:11:02.666 "data_size": 63488 00:11:02.666 }, 00:11:02.666 { 00:11:02.666 "name": null, 00:11:02.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.666 "is_configured": false, 00:11:02.666 
"data_offset": 2048, 00:11:02.666 "data_size": 63488 00:11:02.666 } 00:11:02.666 ] 00:11:02.666 }' 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.666 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.235 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:03.235 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:03.235 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.236 [2024-11-05 03:22:16.690130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:03.236 [2024-11-05 03:22:16.690414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.236 [2024-11-05 03:22:16.690451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:03.236 [2024-11-05 03:22:16.690471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.236 [2024-11-05 03:22:16.691069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.236 [2024-11-05 03:22:16.691099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:03.236 [2024-11-05 03:22:16.691197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:03.236 [2024-11-05 03:22:16.691248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:03.236 pt2 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.236 03:22:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.236 [2024-11-05 03:22:16.702121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:03.236 [2024-11-05 03:22:16.702203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.236 [2024-11-05 03:22:16.702262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.236 [2024-11-05 03:22:16.702297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.236 [2024-11-05 03:22:16.702837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.236 [2024-11-05 03:22:16.702875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:03.236 [2024-11-05 03:22:16.702990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:03.236 [2024-11-05 03:22:16.703025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:03.236 [2024-11-05 03:22:16.703186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:03.236 [2024-11-05 03:22:16.703216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.236 [2024-11-05 03:22:16.703567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.236 [2024-11-05 03:22:16.703794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:03.236 [2024-11-05 03:22:16.703815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:03.236 [2024-11-05 03:22:16.703999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.236 pt3 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.236 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.236 "name": "raid_bdev1", 00:11:03.236 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:03.236 "strip_size_kb": 0, 00:11:03.236 "state": "online", 00:11:03.236 "raid_level": "raid1", 00:11:03.236 "superblock": true, 00:11:03.236 "num_base_bdevs": 3, 00:11:03.236 "num_base_bdevs_discovered": 3, 00:11:03.236 "num_base_bdevs_operational": 3, 00:11:03.236 "base_bdevs_list": [ 00:11:03.236 { 00:11:03.236 "name": "pt1", 00:11:03.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.236 "is_configured": true, 00:11:03.236 "data_offset": 2048, 00:11:03.236 "data_size": 63488 00:11:03.236 }, 00:11:03.236 { 00:11:03.236 "name": "pt2", 00:11:03.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.236 "is_configured": true, 00:11:03.236 "data_offset": 2048, 00:11:03.236 "data_size": 63488 00:11:03.236 }, 00:11:03.236 { 00:11:03.236 "name": "pt3", 00:11:03.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.236 "is_configured": true, 00:11:03.236 "data_offset": 2048, 00:11:03.236 "data_size": 63488 00:11:03.237 } 00:11:03.237 ] 00:11:03.237 }' 00:11:03.237 03:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.237 03:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.805 [2024-11-05 03:22:17.242744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.805 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.805 "name": "raid_bdev1", 00:11:03.805 "aliases": [ 00:11:03.805 "fbb58564-0672-406a-8226-904fa0395978" 00:11:03.805 ], 00:11:03.805 "product_name": "Raid Volume", 00:11:03.805 "block_size": 512, 00:11:03.805 "num_blocks": 63488, 00:11:03.805 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:03.805 "assigned_rate_limits": { 00:11:03.805 "rw_ios_per_sec": 0, 00:11:03.805 "rw_mbytes_per_sec": 0, 00:11:03.805 "r_mbytes_per_sec": 0, 00:11:03.805 "w_mbytes_per_sec": 0 00:11:03.805 }, 00:11:03.805 "claimed": false, 00:11:03.805 "zoned": false, 00:11:03.805 "supported_io_types": { 00:11:03.805 "read": true, 00:11:03.805 "write": true, 00:11:03.805 "unmap": false, 00:11:03.805 "flush": false, 00:11:03.805 "reset": true, 00:11:03.805 "nvme_admin": false, 00:11:03.805 "nvme_io": false, 00:11:03.805 "nvme_io_md": false, 00:11:03.805 "write_zeroes": true, 00:11:03.805 "zcopy": false, 00:11:03.805 "get_zone_info": false, 
00:11:03.805 "zone_management": false, 00:11:03.805 "zone_append": false, 00:11:03.805 "compare": false, 00:11:03.805 "compare_and_write": false, 00:11:03.805 "abort": false, 00:11:03.805 "seek_hole": false, 00:11:03.805 "seek_data": false, 00:11:03.805 "copy": false, 00:11:03.805 "nvme_iov_md": false 00:11:03.805 }, 00:11:03.805 "memory_domains": [ 00:11:03.805 { 00:11:03.805 "dma_device_id": "system", 00:11:03.805 "dma_device_type": 1 00:11:03.805 }, 00:11:03.805 { 00:11:03.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.805 "dma_device_type": 2 00:11:03.805 }, 00:11:03.805 { 00:11:03.805 "dma_device_id": "system", 00:11:03.805 "dma_device_type": 1 00:11:03.805 }, 00:11:03.805 { 00:11:03.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.805 "dma_device_type": 2 00:11:03.805 }, 00:11:03.805 { 00:11:03.805 "dma_device_id": "system", 00:11:03.805 "dma_device_type": 1 00:11:03.805 }, 00:11:03.805 { 00:11:03.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.805 "dma_device_type": 2 00:11:03.805 } 00:11:03.805 ], 00:11:03.805 "driver_specific": { 00:11:03.805 "raid": { 00:11:03.805 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:03.805 "strip_size_kb": 0, 00:11:03.805 "state": "online", 00:11:03.805 "raid_level": "raid1", 00:11:03.805 "superblock": true, 00:11:03.805 "num_base_bdevs": 3, 00:11:03.805 "num_base_bdevs_discovered": 3, 00:11:03.805 "num_base_bdevs_operational": 3, 00:11:03.805 "base_bdevs_list": [ 00:11:03.805 { 00:11:03.805 "name": "pt1", 00:11:03.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.805 "is_configured": true, 00:11:03.805 "data_offset": 2048, 00:11:03.805 "data_size": 63488 00:11:03.805 }, 00:11:03.805 { 00:11:03.805 "name": "pt2", 00:11:03.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.806 "is_configured": true, 00:11:03.806 "data_offset": 2048, 00:11:03.806 "data_size": 63488 00:11:03.806 }, 00:11:03.806 { 00:11:03.806 "name": "pt3", 00:11:03.806 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:03.806 "is_configured": true, 00:11:03.806 "data_offset": 2048, 00:11:03.806 "data_size": 63488 00:11:03.806 } 00:11:03.806 ] 00:11:03.806 } 00:11:03.806 } 00:11:03.806 }' 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.806 pt2 00:11:03.806 pt3' 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.806 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:04.065 [2024-11-05 03:22:17.566774] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fbb58564-0672-406a-8226-904fa0395978 '!=' fbb58564-0672-406a-8226-904fa0395978 ']' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 [2024-11-05 03:22:17.622455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.065 03:22:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.065 "name": "raid_bdev1", 00:11:04.065 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:04.065 "strip_size_kb": 0, 00:11:04.065 "state": "online", 00:11:04.065 "raid_level": "raid1", 00:11:04.065 "superblock": true, 00:11:04.065 "num_base_bdevs": 3, 00:11:04.065 "num_base_bdevs_discovered": 2, 00:11:04.065 "num_base_bdevs_operational": 2, 00:11:04.065 "base_bdevs_list": [ 00:11:04.065 { 00:11:04.065 "name": null, 00:11:04.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.065 "is_configured": false, 00:11:04.065 "data_offset": 0, 00:11:04.065 "data_size": 63488 00:11:04.065 }, 00:11:04.065 { 00:11:04.065 "name": "pt2", 00:11:04.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.065 "is_configured": true, 00:11:04.065 "data_offset": 2048, 00:11:04.065 "data_size": 63488 00:11:04.065 }, 00:11:04.065 { 00:11:04.065 "name": "pt3", 00:11:04.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.065 "is_configured": true, 00:11:04.065 "data_offset": 2048, 00:11:04.065 "data_size": 63488 00:11:04.065 } 
00:11:04.065 ] 00:11:04.065 }' 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.065 03:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.632 [2024-11-05 03:22:18.150559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.632 [2024-11-05 03:22:18.150594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.632 [2024-11-05 03:22:18.150702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.632 [2024-11-05 03:22:18.150790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.632 [2024-11-05 03:22:18.150812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.632 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.633 03:22:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.633 [2024-11-05 03:22:18.234536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.633 [2024-11-05 03:22:18.234606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.633 [2024-11-05 03:22:18.234631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:04.633 [2024-11-05 03:22:18.234649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.633 [2024-11-05 03:22:18.237381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.633 [2024-11-05 03:22:18.237552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.633 [2024-11-05 03:22:18.237660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.633 [2024-11-05 03:22:18.237736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.633 pt2 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.633 03:22:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.633 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.891 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.891 "name": "raid_bdev1", 00:11:04.891 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:04.891 "strip_size_kb": 0, 00:11:04.891 "state": "configuring", 00:11:04.891 "raid_level": "raid1", 00:11:04.891 "superblock": true, 00:11:04.891 "num_base_bdevs": 3, 00:11:04.891 "num_base_bdevs_discovered": 1, 00:11:04.891 "num_base_bdevs_operational": 2, 00:11:04.891 "base_bdevs_list": [ 00:11:04.891 { 00:11:04.891 "name": null, 00:11:04.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.891 "is_configured": false, 00:11:04.891 "data_offset": 2048, 00:11:04.891 "data_size": 63488 00:11:04.891 }, 00:11:04.891 { 00:11:04.891 "name": "pt2", 00:11:04.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.891 "is_configured": true, 00:11:04.891 "data_offset": 2048, 00:11:04.891 "data_size": 63488 00:11:04.891 }, 00:11:04.891 { 00:11:04.891 "name": null, 00:11:04.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.891 "is_configured": false, 00:11:04.891 "data_offset": 2048, 00:11:04.891 "data_size": 63488 00:11:04.891 } 
00:11:04.891 ] 00:11:04.891 }' 00:11:04.891 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.891 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.201 [2024-11-05 03:22:18.742713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.201 [2024-11-05 03:22:18.742931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.201 [2024-11-05 03:22:18.742970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:05.201 [2024-11-05 03:22:18.742990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.201 [2024-11-05 03:22:18.743558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.201 [2024-11-05 03:22:18.743590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.201 [2024-11-05 03:22:18.743700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:05.201 [2024-11-05 03:22:18.743741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.201 [2024-11-05 03:22:18.743881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:05.201 [2024-11-05 03:22:18.743902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:05.201 [2024-11-05 03:22:18.744229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:05.201 [2024-11-05 03:22:18.744443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.201 [2024-11-05 03:22:18.744459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:05.201 [2024-11-05 03:22:18.744629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.201 pt3 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.201 
03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.201 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.201 "name": "raid_bdev1", 00:11:05.201 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:05.201 "strip_size_kb": 0, 00:11:05.201 "state": "online", 00:11:05.201 "raid_level": "raid1", 00:11:05.201 "superblock": true, 00:11:05.201 "num_base_bdevs": 3, 00:11:05.201 "num_base_bdevs_discovered": 2, 00:11:05.201 "num_base_bdevs_operational": 2, 00:11:05.201 "base_bdevs_list": [ 00:11:05.201 { 00:11:05.201 "name": null, 00:11:05.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.201 "is_configured": false, 00:11:05.201 "data_offset": 2048, 00:11:05.201 "data_size": 63488 00:11:05.201 }, 00:11:05.201 { 00:11:05.201 "name": "pt2", 00:11:05.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.201 "is_configured": true, 00:11:05.201 "data_offset": 2048, 00:11:05.201 "data_size": 63488 00:11:05.201 }, 00:11:05.201 { 00:11:05.201 "name": "pt3", 00:11:05.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.202 "is_configured": true, 00:11:05.202 "data_offset": 2048, 00:11:05.202 "data_size": 63488 00:11:05.202 } 00:11:05.202 ] 00:11:05.202 }' 00:11:05.202 03:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.202 03:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.769 [2024-11-05 03:22:19.270817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.769 [2024-11-05 03:22:19.270855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.769 [2024-11-05 03:22:19.270944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.769 [2024-11-05 03:22:19.271024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.769 [2024-11-05 03:22:19.271040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.769 [2024-11-05 03:22:19.342851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:05.769 [2024-11-05 03:22:19.342924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.769 [2024-11-05 03:22:19.342956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:05.769 [2024-11-05 03:22:19.342971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.769 [2024-11-05 03:22:19.345767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.769 [2024-11-05 03:22:19.345945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:05.769 [2024-11-05 03:22:19.346073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:05.769 [2024-11-05 03:22:19.346130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:05.769 [2024-11-05 03:22:19.346294] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:05.769 [2024-11-05 03:22:19.346328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.769 [2024-11-05 03:22:19.346352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:05.769 [2024-11-05 03:22:19.346419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.769 pt1 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.769 "name": "raid_bdev1", 00:11:05.769 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:05.769 "strip_size_kb": 0, 00:11:05.769 "state": "configuring", 00:11:05.769 "raid_level": "raid1", 00:11:05.769 "superblock": true, 00:11:05.769 "num_base_bdevs": 3, 00:11:05.769 "num_base_bdevs_discovered": 1, 00:11:05.769 "num_base_bdevs_operational": 2, 00:11:05.769 "base_bdevs_list": [ 00:11:05.769 { 00:11:05.769 "name": null, 00:11:05.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.769 "is_configured": false, 00:11:05.769 "data_offset": 2048, 00:11:05.769 "data_size": 63488 00:11:05.769 }, 00:11:05.769 { 00:11:05.769 "name": "pt2", 00:11:05.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.769 "is_configured": true, 00:11:05.769 "data_offset": 2048, 00:11:05.769 "data_size": 63488 00:11:05.769 }, 00:11:05.769 { 00:11:05.769 "name": null, 00:11:05.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.769 "is_configured": false, 00:11:05.769 "data_offset": 2048, 00:11:05.769 "data_size": 63488 00:11:05.769 } 00:11:05.769 ] 00:11:05.769 }' 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.769 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.336 [2024-11-05 03:22:19.903001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.336 [2024-11-05 03:22:19.903093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.336 [2024-11-05 03:22:19.903124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:06.336 [2024-11-05 03:22:19.903139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.336 [2024-11-05 03:22:19.903720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.336 [2024-11-05 03:22:19.903755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.336 [2024-11-05 03:22:19.903856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.336 [2024-11-05 03:22:19.903919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.336 [2024-11-05 03:22:19.904082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:06.336 [2024-11-05 03:22:19.904104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:06.336 [2024-11-05 03:22:19.904436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:06.336 [2024-11-05 03:22:19.904643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:06.336 [2024-11-05 03:22:19.904664] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:06.336 [2024-11-05 03:22:19.904823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.336 pt3 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:06.336 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.336 "name": "raid_bdev1", 00:11:06.336 "uuid": "fbb58564-0672-406a-8226-904fa0395978", 00:11:06.336 "strip_size_kb": 0, 00:11:06.336 "state": "online", 00:11:06.336 "raid_level": "raid1", 00:11:06.336 "superblock": true, 00:11:06.336 "num_base_bdevs": 3, 00:11:06.336 "num_base_bdevs_discovered": 2, 00:11:06.336 "num_base_bdevs_operational": 2, 00:11:06.336 "base_bdevs_list": [ 00:11:06.336 { 00:11:06.336 "name": null, 00:11:06.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.336 "is_configured": false, 00:11:06.336 "data_offset": 2048, 00:11:06.336 "data_size": 63488 00:11:06.336 }, 00:11:06.336 { 00:11:06.336 "name": "pt2", 00:11:06.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.336 "is_configured": true, 00:11:06.336 "data_offset": 2048, 00:11:06.336 "data_size": 63488 00:11:06.336 }, 00:11:06.336 { 00:11:06.336 "name": "pt3", 00:11:06.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.337 "is_configured": true, 00:11:06.337 "data_offset": 2048, 00:11:06.337 "data_size": 63488 00:11:06.337 } 00:11:06.337 ] 00:11:06.337 }' 00:11:06.337 03:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.337 03:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.903 [2024-11-05 03:22:20.471491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fbb58564-0672-406a-8226-904fa0395978 '!=' fbb58564-0672-406a-8226-904fa0395978 ']' 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68481 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68481 ']' 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68481 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.903 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68481 00:11:07.161 killing process with pid 68481 00:11:07.161 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:07.161 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:07.161 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68481' 00:11:07.161 03:22:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 68481 00:11:07.161 [2024-11-05 03:22:20.540645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.161 03:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68481 00:11:07.161 [2024-11-05 03:22:20.540756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.161 [2024-11-05 03:22:20.540834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.161 [2024-11-05 03:22:20.540854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:07.420 [2024-11-05 03:22:20.806026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.355 03:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:08.355 00:11:08.355 real 0m8.542s 00:11:08.355 user 0m14.035s 00:11:08.355 sys 0m1.211s 00:11:08.355 03:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.355 ************************************ 00:11:08.355 END TEST raid_superblock_test 00:11:08.355 ************************************ 00:11:08.355 03:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.355 03:22:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:08.355 03:22:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:08.355 03:22:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.355 03:22:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.355 ************************************ 00:11:08.355 START TEST raid_read_error_test 00:11:08.355 ************************************ 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:11:08.355 03:22:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.355 03:22:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M38p2ty5iT 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68932 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68932 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 68932 ']' 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.355 03:22:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.355 [2024-11-05 03:22:21.976763] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:08.355 [2024-11-05 03:22:21.976928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68932 ] 00:11:08.614 [2024-11-05 03:22:22.152417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.874 [2024-11-05 03:22:22.277882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.874 [2024-11-05 03:22:22.478203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.874 [2024-11-05 03:22:22.478267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.440 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.440 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.441 BaseBdev1_malloc 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.441 true 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.441 [2024-11-05 03:22:22.965602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.441 [2024-11-05 03:22:22.965722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.441 [2024-11-05 03:22:22.965753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.441 [2024-11-05 03:22:22.965772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.441 [2024-11-05 03:22:22.968625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.441 [2024-11-05 03:22:22.968876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.441 BaseBdev1 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.441 BaseBdev2_malloc 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.441 true 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.441 [2024-11-05 03:22:23.024174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.441 [2024-11-05 03:22:23.024276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.441 [2024-11-05 03:22:23.024303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.441 [2024-11-05 03:22:23.024333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.441 [2024-11-05 03:22:23.027239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.441 [2024-11-05 03:22:23.027287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.441 BaseBdev2 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.441 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 BaseBdev3_malloc 00:11:09.700 03:22:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 true 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 [2024-11-05 03:22:23.092852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.700 [2024-11-05 03:22:23.092946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.700 [2024-11-05 03:22:23.092971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.700 [2024-11-05 03:22:23.092988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.700 [2024-11-05 03:22:23.095866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.700 [2024-11-05 03:22:23.095929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.700 BaseBdev3 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 [2024-11-05 03:22:23.104948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.700 [2024-11-05 03:22:23.107496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.700 [2024-11-05 03:22:23.107600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.700 [2024-11-05 03:22:23.107907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.700 [2024-11-05 03:22:23.107925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.700 [2024-11-05 03:22:23.108206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:09.700 [2024-11-05 03:22:23.108644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.700 [2024-11-05 03:22:23.108767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:09.700 [2024-11-05 03:22:23.109118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.700 03:22:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.700 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.700 "name": "raid_bdev1", 00:11:09.700 "uuid": "1e582356-f77c-4f7b-8b09-35766550bc1c", 00:11:09.700 "strip_size_kb": 0, 00:11:09.700 "state": "online", 00:11:09.700 "raid_level": "raid1", 00:11:09.700 "superblock": true, 00:11:09.700 "num_base_bdevs": 3, 00:11:09.700 "num_base_bdevs_discovered": 3, 00:11:09.700 "num_base_bdevs_operational": 3, 00:11:09.700 "base_bdevs_list": [ 00:11:09.700 { 00:11:09.700 "name": "BaseBdev1", 00:11:09.700 "uuid": "aeed5dd3-9868-5e38-a25e-96b63bc315ab", 00:11:09.700 "is_configured": true, 00:11:09.700 "data_offset": 2048, 00:11:09.700 "data_size": 63488 00:11:09.700 }, 00:11:09.700 { 00:11:09.700 "name": "BaseBdev2", 00:11:09.700 "uuid": "0590abbc-9428-5fa3-a177-f8c0b055fbe6", 00:11:09.700 "is_configured": true, 00:11:09.700 "data_offset": 2048, 00:11:09.700 "data_size": 63488 
00:11:09.700 }, 00:11:09.700 { 00:11:09.700 "name": "BaseBdev3", 00:11:09.700 "uuid": "40851598-a8fe-51ab-ad7a-f32650595aa4", 00:11:09.701 "is_configured": true, 00:11:09.701 "data_offset": 2048, 00:11:09.701 "data_size": 63488 00:11:09.701 } 00:11:09.701 ] 00:11:09.701 }' 00:11:09.701 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.701 03:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.002 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.002 03:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.260 [2024-11-05 03:22:23.750684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.197 
03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.197 "name": "raid_bdev1", 00:11:11.197 "uuid": "1e582356-f77c-4f7b-8b09-35766550bc1c", 00:11:11.197 "strip_size_kb": 0, 00:11:11.197 "state": "online", 00:11:11.197 "raid_level": "raid1", 00:11:11.197 "superblock": true, 00:11:11.197 "num_base_bdevs": 3, 00:11:11.197 "num_base_bdevs_discovered": 3, 00:11:11.197 "num_base_bdevs_operational": 3, 00:11:11.197 "base_bdevs_list": [ 00:11:11.197 { 00:11:11.197 "name": "BaseBdev1", 00:11:11.197 "uuid": "aeed5dd3-9868-5e38-a25e-96b63bc315ab", 
00:11:11.197 "is_configured": true, 00:11:11.197 "data_offset": 2048, 00:11:11.197 "data_size": 63488 00:11:11.197 }, 00:11:11.197 { 00:11:11.197 "name": "BaseBdev2", 00:11:11.197 "uuid": "0590abbc-9428-5fa3-a177-f8c0b055fbe6", 00:11:11.197 "is_configured": true, 00:11:11.197 "data_offset": 2048, 00:11:11.197 "data_size": 63488 00:11:11.197 }, 00:11:11.197 { 00:11:11.197 "name": "BaseBdev3", 00:11:11.197 "uuid": "40851598-a8fe-51ab-ad7a-f32650595aa4", 00:11:11.197 "is_configured": true, 00:11:11.197 "data_offset": 2048, 00:11:11.197 "data_size": 63488 00:11:11.197 } 00:11:11.197 ] 00:11:11.197 }' 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.197 03:22:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.765 [2024-11-05 03:22:25.154151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.765 [2024-11-05 03:22:25.154184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.765 [2024-11-05 03:22:25.157500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.765 [2024-11-05 03:22:25.157559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.765 [2024-11-05 03:22:25.157703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.765 [2024-11-05 03:22:25.157721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:11.765 { 00:11:11.765 "results": [ 00:11:11.765 { 00:11:11.765 "job": "raid_bdev1", 
00:11:11.765 "core_mask": "0x1", 00:11:11.765 "workload": "randrw", 00:11:11.765 "percentage": 50, 00:11:11.765 "status": "finished", 00:11:11.765 "queue_depth": 1, 00:11:11.765 "io_size": 131072, 00:11:11.765 "runtime": 1.40112, 00:11:11.765 "iops": 10099.777321000343, 00:11:11.765 "mibps": 1262.472165125043, 00:11:11.765 "io_failed": 0, 00:11:11.765 "io_timeout": 0, 00:11:11.765 "avg_latency_us": 95.00347961274821, 00:11:11.765 "min_latency_us": 38.167272727272724, 00:11:11.765 "max_latency_us": 1921.3963636363637 00:11:11.765 } 00:11:11.765 ], 00:11:11.765 "core_count": 1 00:11:11.765 } 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68932 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 68932 ']' 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 68932 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68932 00:11:11.765 killing process with pid 68932 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68932' 00:11:11.765 03:22:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 68932 00:11:11.765 [2024-11-05 03:22:25.192336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.765 03:22:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 68932 00:11:11.765 [2024-11-05 03:22:25.391028] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M38p2ty5iT 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:13.143 00:11:13.143 real 0m4.540s 00:11:13.143 user 0m5.667s 00:11:13.143 sys 0m0.551s 00:11:13.143 ************************************ 00:11:13.143 END TEST raid_read_error_test 00:11:13.143 ************************************ 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.143 03:22:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.143 03:22:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:13.143 03:22:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:13.143 03:22:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.143 03:22:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.143 ************************************ 00:11:13.143 START TEST raid_write_error_test 00:11:13.143 ************************************ 00:11:13.143 03:22:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TG5oLqrfTX 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69078 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69078 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69078 ']' 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.143 03:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.143 [2024-11-05 03:22:26.550666] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:11:13.144 [2024-11-05 03:22:26.550882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69078 ] 00:11:13.144 [2024-11-05 03:22:26.721612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.403 [2024-11-05 03:22:26.842217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.661 [2024-11-05 03:22:27.041581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.661 [2024-11-05 03:22:27.041617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.921 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.921 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:13.921 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.921 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.921 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.921 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 BaseBdev1_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 true 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 [2024-11-05 03:22:27.594974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.185 [2024-11-05 03:22:27.595057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.185 [2024-11-05 03:22:27.595091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.185 [2024-11-05 03:22:27.595108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.185 [2024-11-05 03:22:27.597946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.185 [2024-11-05 03:22:27.598052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.185 BaseBdev1 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.185 BaseBdev2_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 true 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 [2024-11-05 03:22:27.648693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.185 [2024-11-05 03:22:27.648773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.185 [2024-11-05 03:22:27.648797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.185 [2024-11-05 03:22:27.648813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.185 [2024-11-05 03:22:27.651609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.185 [2024-11-05 03:22:27.651672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.185 BaseBdev2 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.185 03:22:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 BaseBdev3_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 true 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 [2024-11-05 03:22:27.707930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:14.185 [2024-11-05 03:22:27.708009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.185 [2024-11-05 03:22:27.708035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:14.185 [2024-11-05 03:22:27.708051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.185 [2024-11-05 03:22:27.710960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.185 [2024-11-05 03:22:27.711024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:14.185 BaseBdev3 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 [2024-11-05 03:22:27.716021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.185 [2024-11-05 03:22:27.718696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.185 [2024-11-05 03:22:27.718799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.185 [2024-11-05 03:22:27.719075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:14.185 [2024-11-05 03:22:27.719095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.185 [2024-11-05 03:22:27.719552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:14.185 [2024-11-05 03:22:27.719934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:14.185 [2024-11-05 03:22:27.720065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:14.185 [2024-11-05 03:22:27.720449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.185 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.185 "name": "raid_bdev1", 00:11:14.185 "uuid": "12b4542a-1f5d-41b1-810c-31f7a5e3af38", 00:11:14.185 "strip_size_kb": 0, 00:11:14.185 "state": "online", 00:11:14.185 "raid_level": "raid1", 00:11:14.185 "superblock": true, 00:11:14.185 "num_base_bdevs": 3, 00:11:14.185 "num_base_bdevs_discovered": 3, 00:11:14.185 "num_base_bdevs_operational": 3, 00:11:14.185 "base_bdevs_list": [ 00:11:14.185 { 00:11:14.185 "name": "BaseBdev1", 00:11:14.186 
"uuid": "5839bef0-6205-5a4b-8300-a11fca57b99a", 00:11:14.186 "is_configured": true, 00:11:14.186 "data_offset": 2048, 00:11:14.186 "data_size": 63488 00:11:14.186 }, 00:11:14.186 { 00:11:14.186 "name": "BaseBdev2", 00:11:14.186 "uuid": "65127370-609b-5045-9603-327cf7eb698a", 00:11:14.186 "is_configured": true, 00:11:14.186 "data_offset": 2048, 00:11:14.186 "data_size": 63488 00:11:14.186 }, 00:11:14.186 { 00:11:14.186 "name": "BaseBdev3", 00:11:14.186 "uuid": "124b549b-1d4d-5a18-9cb0-d5f960c1192d", 00:11:14.186 "is_configured": true, 00:11:14.186 "data_offset": 2048, 00:11:14.186 "data_size": 63488 00:11:14.186 } 00:11:14.186 ] 00:11:14.186 }' 00:11:14.186 03:22:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.186 03:22:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.757 03:22:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.757 03:22:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.757 [2024-11-05 03:22:28.314049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.692 [2024-11-05 03:22:29.222940] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:15.692 [2024-11-05 03:22:29.223017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.692 [2024-11-05 03:22:29.223288] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.692 "name": "raid_bdev1", 00:11:15.692 "uuid": "12b4542a-1f5d-41b1-810c-31f7a5e3af38", 00:11:15.692 "strip_size_kb": 0, 00:11:15.692 "state": "online", 00:11:15.692 "raid_level": "raid1", 00:11:15.692 "superblock": true, 00:11:15.692 "num_base_bdevs": 3, 00:11:15.692 "num_base_bdevs_discovered": 2, 00:11:15.692 "num_base_bdevs_operational": 2, 00:11:15.692 "base_bdevs_list": [ 00:11:15.692 { 00:11:15.692 "name": null, 00:11:15.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.692 "is_configured": false, 00:11:15.692 "data_offset": 0, 00:11:15.692 "data_size": 63488 00:11:15.692 }, 00:11:15.692 { 00:11:15.692 "name": "BaseBdev2", 00:11:15.692 "uuid": "65127370-609b-5045-9603-327cf7eb698a", 00:11:15.692 "is_configured": true, 00:11:15.692 "data_offset": 2048, 00:11:15.692 "data_size": 63488 00:11:15.692 }, 00:11:15.692 { 00:11:15.692 "name": "BaseBdev3", 00:11:15.692 "uuid": "124b549b-1d4d-5a18-9cb0-d5f960c1192d", 00:11:15.692 "is_configured": true, 00:11:15.692 "data_offset": 2048, 00:11:15.692 "data_size": 63488 00:11:15.692 } 00:11:15.692 ] 00:11:15.692 }' 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.692 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.258 [2024-11-05 03:22:29.816125] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.258 [2024-11-05 03:22:29.816167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.258 [2024-11-05 03:22:29.819468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.258 [2024-11-05 03:22:29.819543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.258 [2024-11-05 03:22:29.819649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.258 [2024-11-05 03:22:29.819673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:16.258 { 00:11:16.258 "results": [ 00:11:16.258 { 00:11:16.258 "job": "raid_bdev1", 00:11:16.258 "core_mask": "0x1", 00:11:16.258 "workload": "randrw", 00:11:16.258 "percentage": 50, 00:11:16.258 "status": "finished", 00:11:16.258 "queue_depth": 1, 00:11:16.258 "io_size": 131072, 00:11:16.258 "runtime": 1.499381, 00:11:16.258 "iops": 11079.238699169857, 00:11:16.258 "mibps": 1384.904837396232, 00:11:16.258 "io_failed": 0, 00:11:16.258 "io_timeout": 0, 00:11:16.258 "avg_latency_us": 86.1179729877635, 00:11:16.258 "min_latency_us": 38.167272727272724, 00:11:16.258 "max_latency_us": 1995.8690909090908 00:11:16.258 } 00:11:16.258 ], 00:11:16.258 "core_count": 1 00:11:16.258 } 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69078 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69078 ']' 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69078 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:16.258 03:22:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69078 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.258 killing process with pid 69078 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69078' 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69078 00:11:16.258 [2024-11-05 03:22:29.853755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.258 03:22:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69078 00:11:16.516 [2024-11-05 03:22:30.051389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TG5oLqrfTX 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:17.944 00:11:17.944 real 0m4.666s 00:11:17.944 user 0m5.842s 00:11:17.944 sys 0m0.548s 00:11:17.944 03:22:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.944 03:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.944 ************************************ 00:11:17.944 END TEST raid_write_error_test 00:11:17.944 ************************************ 00:11:17.944 03:22:31 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:17.944 03:22:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:17.944 03:22:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:17.944 03:22:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:17.944 03:22:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.944 03:22:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.944 ************************************ 00:11:17.944 START TEST raid_state_function_test 00:11:17.944 ************************************ 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:17.944 
03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69217 00:11:17.944 Process raid pid: 69217 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69217' 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69217 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69217 ']' 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.944 03:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.944 [2024-11-05 03:22:31.286398] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:17.944 [2024-11-05 03:22:31.286592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.945 [2024-11-05 03:22:31.472809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.202 [2024-11-05 03:22:31.600839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.202 [2024-11-05 03:22:31.803625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.202 [2024-11-05 03:22:31.803678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.769 [2024-11-05 03:22:32.271441] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.769 [2024-11-05 03:22:32.271515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.769 [2024-11-05 03:22:32.271534] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.769 [2024-11-05 03:22:32.271551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.769 [2024-11-05 03:22:32.271561] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:18.769 [2024-11-05 03:22:32.271575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.769 [2024-11-05 03:22:32.271585] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.769 [2024-11-05 03:22:32.271599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.769 "name": "Existed_Raid", 00:11:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.769 "strip_size_kb": 64, 00:11:18.769 "state": "configuring", 00:11:18.769 "raid_level": "raid0", 00:11:18.769 "superblock": false, 00:11:18.769 "num_base_bdevs": 4, 00:11:18.769 "num_base_bdevs_discovered": 0, 00:11:18.769 "num_base_bdevs_operational": 4, 00:11:18.769 "base_bdevs_list": [ 00:11:18.769 { 00:11:18.769 "name": "BaseBdev1", 00:11:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.769 "is_configured": false, 00:11:18.769 "data_offset": 0, 00:11:18.769 "data_size": 0 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "name": "BaseBdev2", 00:11:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.769 "is_configured": false, 00:11:18.769 "data_offset": 0, 00:11:18.769 "data_size": 0 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "name": "BaseBdev3", 00:11:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.769 "is_configured": false, 00:11:18.769 "data_offset": 0, 00:11:18.769 "data_size": 0 00:11:18.769 }, 00:11:18.769 { 00:11:18.769 "name": "BaseBdev4", 00:11:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.769 "is_configured": false, 00:11:18.769 "data_offset": 0, 00:11:18.769 "data_size": 0 00:11:18.769 } 00:11:18.769 ] 00:11:18.769 }' 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.769 03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.336 03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.336 [2024-11-05 03:22:32.787537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:19.336 [2024-11-05 03:22:32.787589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.336 [2024-11-05 03:22:32.795519] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:19.336 [2024-11-05 03:22:32.795572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:19.336 [2024-11-05 03:22:32.795587] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:19.336 [2024-11-05 03:22:32.795603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:19.336 [2024-11-05 03:22:32.795613] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:19.336 [2024-11-05 03:22:32.795626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:19.336 [2024-11-05 03:22:32.795636] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:19.336 [2024-11-05 03:22:32.795650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.336 [2024-11-05 03:22:32.839874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:19.336 BaseBdev1
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.336 [
00:11:19.336 {
00:11:19.336 "name": "BaseBdev1",
00:11:19.336 "aliases": [
00:11:19.336 "b479a122-0450-466a-adbd-6cb577147937"
00:11:19.336 ],
00:11:19.336 "product_name": "Malloc disk",
00:11:19.336 "block_size": 512,
00:11:19.336 "num_blocks": 65536,
00:11:19.336 "uuid": "b479a122-0450-466a-adbd-6cb577147937",
00:11:19.336 "assigned_rate_limits": {
00:11:19.336 "rw_ios_per_sec": 0,
00:11:19.336 "rw_mbytes_per_sec": 0,
00:11:19.336 "r_mbytes_per_sec": 0,
00:11:19.336 "w_mbytes_per_sec": 0
00:11:19.336 },
00:11:19.336 "claimed": true,
00:11:19.336 "claim_type": "exclusive_write",
00:11:19.336 "zoned": false,
00:11:19.336 "supported_io_types": {
00:11:19.336 "read": true,
00:11:19.336 "write": true,
00:11:19.336 "unmap": true,
00:11:19.336 "flush": true,
00:11:19.336 "reset": true,
00:11:19.336 "nvme_admin": false,
00:11:19.336 "nvme_io": false,
00:11:19.336 "nvme_io_md": false,
00:11:19.336 "write_zeroes": true,
00:11:19.336 "zcopy": true,
00:11:19.336 "get_zone_info": false,
00:11:19.336 "zone_management": false,
00:11:19.336 "zone_append": false,
00:11:19.336 "compare": false,
00:11:19.336 "compare_and_write": false,
00:11:19.336 "abort": true,
00:11:19.336 "seek_hole": false,
00:11:19.336 "seek_data": false,
00:11:19.336 "copy": true,
00:11:19.336 "nvme_iov_md": false
00:11:19.336 },
00:11:19.336 "memory_domains": [
00:11:19.336 {
00:11:19.336 "dma_device_id": "system",
00:11:19.336 "dma_device_type": 1
00:11:19.336 },
00:11:19.336 {
00:11:19.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:19.336 "dma_device_type": 2
00:11:19.336 }
00:11:19.336 ],
00:11:19.336 "driver_specific": {}
00:11:19.336 }
00:11:19.336 ]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.336 "name": "Existed_Raid",
00:11:19.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.336 "strip_size_kb": 64,
00:11:19.336 "state": "configuring",
00:11:19.336 "raid_level": "raid0",
00:11:19.336 "superblock": false,
00:11:19.336 "num_base_bdevs": 4,
00:11:19.336 "num_base_bdevs_discovered": 1,
00:11:19.336 "num_base_bdevs_operational": 4,
00:11:19.336 "base_bdevs_list": [
00:11:19.336 {
00:11:19.336 "name": "BaseBdev1",
00:11:19.336 "uuid": "b479a122-0450-466a-adbd-6cb577147937",
00:11:19.336 "is_configured": true,
00:11:19.336 "data_offset": 0,
00:11:19.336 "data_size": 65536
00:11:19.336 },
00:11:19.336 {
00:11:19.336 "name": "BaseBdev2",
00:11:19.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.336 "is_configured": false,
00:11:19.336 "data_offset": 0,
00:11:19.336 "data_size": 0
00:11:19.336 },
00:11:19.336 {
00:11:19.336 "name": "BaseBdev3",
00:11:19.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.336 "is_configured": false,
00:11:19.336 "data_offset": 0,
00:11:19.336 "data_size": 0
00:11:19.336 },
00:11:19.336 {
00:11:19.336 "name": "BaseBdev4",
00:11:19.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.336 "is_configured": false,
00:11:19.336 "data_offset": 0,
00:11:19.336 "data_size": 0
00:11:19.336 }
00:11:19.336 ]
00:11:19.336 }'
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.336  03:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.903 [2024-11-05 03:22:33.408060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:19.903 [2024-11-05 03:22:33.408124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.903  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.904 [2024-11-05 03:22:33.416112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:19.904 [2024-11-05 03:22:33.418503] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:19.904 [2024-11-05 03:22:33.418562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:19.904 [2024-11-05 03:22:33.418578] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:19.904 [2024-11-05 03:22:33.418595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:19.904 [2024-11-05 03:22:33.418606] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:19.904 [2024-11-05 03:22:33.418619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.904 "name": "Existed_Raid",
00:11:19.904 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.904 "strip_size_kb": 64,
00:11:19.904 "state": "configuring",
00:11:19.904 "raid_level": "raid0",
00:11:19.904 "superblock": false,
00:11:19.904 "num_base_bdevs": 4,
00:11:19.904 "num_base_bdevs_discovered": 1,
00:11:19.904 "num_base_bdevs_operational": 4,
00:11:19.904 "base_bdevs_list": [
00:11:19.904 {
00:11:19.904 "name": "BaseBdev1",
00:11:19.904 "uuid": "b479a122-0450-466a-adbd-6cb577147937",
00:11:19.904 "is_configured": true,
00:11:19.904 "data_offset": 0,
00:11:19.904 "data_size": 65536
00:11:19.904 },
00:11:19.904 {
00:11:19.904 "name": "BaseBdev2",
00:11:19.904 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.904 "is_configured": false,
00:11:19.904 "data_offset": 0,
00:11:19.904 "data_size": 0
00:11:19.904 },
00:11:19.904 {
00:11:19.904 "name": "BaseBdev3",
00:11:19.904 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.904 "is_configured": false,
00:11:19.904 "data_offset": 0,
00:11:19.904 "data_size": 0
00:11:19.904 },
00:11:19.904 {
00:11:19.904 "name": "BaseBdev4",
00:11:19.904 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.904 "is_configured": false,
00:11:19.904 "data_offset": 0,
00:11:19.904 "data_size": 0
00:11:19.904 }
00:11:19.904 ]
00:11:19.904 }'
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.904  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.471 [2024-11-05 03:22:33.994031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:20.471 BaseBdev2
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:20.471  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:20.472  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.472  03:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.472 [
00:11:20.472 {
00:11:20.472 "name": "BaseBdev2",
00:11:20.472 "aliases": [
00:11:20.472 "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27"
00:11:20.472 ],
00:11:20.472 "product_name": "Malloc disk",
00:11:20.472 "block_size": 512,
00:11:20.472 "num_blocks": 65536,
00:11:20.472 "uuid": "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27",
00:11:20.472 "assigned_rate_limits": {
00:11:20.472 "rw_ios_per_sec": 0,
00:11:20.472 "rw_mbytes_per_sec": 0,
00:11:20.472 "r_mbytes_per_sec": 0,
00:11:20.472 "w_mbytes_per_sec": 0
00:11:20.472 },
00:11:20.472 "claimed": true,
00:11:20.472 "claim_type": "exclusive_write",
00:11:20.472 "zoned": false,
00:11:20.472 "supported_io_types": {
00:11:20.472 "read": true,
00:11:20.472 "write": true,
00:11:20.472 "unmap": true,
00:11:20.472 "flush": true,
00:11:20.472 "reset": true,
00:11:20.472 "nvme_admin": false,
00:11:20.472 "nvme_io": false,
00:11:20.472 "nvme_io_md": false,
00:11:20.472 "write_zeroes": true,
00:11:20.472 "zcopy": true,
00:11:20.472 "get_zone_info": false,
00:11:20.472 "zone_management": false,
00:11:20.472 "zone_append": false,
00:11:20.472 "compare": false,
00:11:20.472 "compare_and_write": false,
00:11:20.472 "abort": true,
00:11:20.472 "seek_hole": false,
00:11:20.472 "seek_data": false,
00:11:20.472 "copy": true,
00:11:20.472 "nvme_iov_md": false
00:11:20.472 },
00:11:20.472 "memory_domains": [
00:11:20.472 {
00:11:20.472 "dma_device_id": "system",
00:11:20.472 "dma_device_type": 1
00:11:20.472 },
00:11:20.472 {
00:11:20.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.472 "dma_device_type": 2
00:11:20.472 }
00:11:20.472 ],
00:11:20.472 "driver_specific": {}
00:11:20.472 }
00:11:20.472 ]
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.472 "name": "Existed_Raid",
00:11:20.472 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.472 "strip_size_kb": 64,
00:11:20.472 "state": "configuring",
00:11:20.472 "raid_level": "raid0",
00:11:20.472 "superblock": false,
00:11:20.472 "num_base_bdevs": 4,
00:11:20.472 "num_base_bdevs_discovered": 2,
00:11:20.472 "num_base_bdevs_operational": 4,
00:11:20.472 "base_bdevs_list": [
00:11:20.472 {
00:11:20.472 "name": "BaseBdev1",
00:11:20.472 "uuid": "b479a122-0450-466a-adbd-6cb577147937",
00:11:20.472 "is_configured": true,
00:11:20.472 "data_offset": 0,
00:11:20.472 "data_size": 65536
00:11:20.472 },
00:11:20.472 {
00:11:20.472 "name": "BaseBdev2",
00:11:20.472 "uuid": "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27",
00:11:20.472 "is_configured": true,
00:11:20.472 "data_offset": 0,
00:11:20.472 "data_size": 65536
00:11:20.472 },
00:11:20.472 {
00:11:20.472 "name": "BaseBdev3",
00:11:20.472 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.472 "is_configured": false,
00:11:20.472 "data_offset": 0,
00:11:20.472 "data_size": 0
00:11:20.472 },
00:11:20.472 {
00:11:20.472 "name": "BaseBdev4",
00:11:20.472 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.472 "is_configured": false,
00:11:20.472 "data_offset": 0,
00:11:20.472 "data_size": 0
00:11:20.472 }
00:11:20.472 ]
00:11:20.472 }'
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.472  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.038  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:21.038  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.038  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.038 [2024-11-05 03:22:34.605578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:21.038 BaseBdev3
00:11:21.038  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.038  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:21.038  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.039 [
00:11:21.039 {
00:11:21.039 "name": "BaseBdev3",
00:11:21.039 "aliases": [
00:11:21.039 "297ff432-d422-44f1-b746-fc7457e2ce41"
00:11:21.039 ],
00:11:21.039 "product_name": "Malloc disk",
00:11:21.039 "block_size": 512,
00:11:21.039 "num_blocks": 65536,
00:11:21.039 "uuid": "297ff432-d422-44f1-b746-fc7457e2ce41",
00:11:21.039 "assigned_rate_limits": {
00:11:21.039 "rw_ios_per_sec": 0,
00:11:21.039 "rw_mbytes_per_sec": 0,
00:11:21.039 "r_mbytes_per_sec": 0,
00:11:21.039 "w_mbytes_per_sec": 0
00:11:21.039 },
00:11:21.039 "claimed": true,
00:11:21.039 "claim_type": "exclusive_write",
00:11:21.039 "zoned": false,
00:11:21.039 "supported_io_types": {
00:11:21.039 "read": true,
00:11:21.039 "write": true,
00:11:21.039 "unmap": true,
00:11:21.039 "flush": true,
00:11:21.039 "reset": true,
00:11:21.039 "nvme_admin": false,
00:11:21.039 "nvme_io": false,
00:11:21.039 "nvme_io_md": false,
00:11:21.039 "write_zeroes": true,
00:11:21.039 "zcopy": true,
00:11:21.039 "get_zone_info": false,
00:11:21.039 "zone_management": false,
00:11:21.039 "zone_append": false,
00:11:21.039 "compare": false,
00:11:21.039 "compare_and_write": false,
00:11:21.039 "abort": true,
00:11:21.039 "seek_hole": false,
00:11:21.039 "seek_data": false,
00:11:21.039 "copy": true,
00:11:21.039 "nvme_iov_md": false
00:11:21.039 },
00:11:21.039 "memory_domains": [
00:11:21.039 {
00:11:21.039 "dma_device_id": "system",
00:11:21.039 "dma_device_type": 1
00:11:21.039 },
00:11:21.039 {
00:11:21.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.039 "dma_device_type": 2
00:11:21.039 }
00:11:21.039 ],
00:11:21.039 "driver_specific": {}
00:11:21.039 }
00:11:21.039 ]
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.039  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.297  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.297 "name": "Existed_Raid",
00:11:21.297 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.297 "strip_size_kb": 64,
00:11:21.297 "state": "configuring",
00:11:21.297 "raid_level": "raid0",
00:11:21.297 "superblock": false,
00:11:21.297 "num_base_bdevs": 4,
00:11:21.297 "num_base_bdevs_discovered": 3,
00:11:21.297 "num_base_bdevs_operational": 4,
00:11:21.297 "base_bdevs_list": [
00:11:21.297 {
00:11:21.297 "name": "BaseBdev1",
00:11:21.297 "uuid": "b479a122-0450-466a-adbd-6cb577147937",
00:11:21.297 "is_configured": true,
00:11:21.297 "data_offset": 0,
00:11:21.297 "data_size": 65536
00:11:21.297 },
00:11:21.297 {
00:11:21.297 "name": "BaseBdev2",
00:11:21.297 "uuid": "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27",
00:11:21.297 "is_configured": true,
00:11:21.297 "data_offset": 0,
00:11:21.297 "data_size": 65536
00:11:21.297 },
00:11:21.297 {
00:11:21.297 "name": "BaseBdev3",
00:11:21.297 "uuid": "297ff432-d422-44f1-b746-fc7457e2ce41",
00:11:21.297 "is_configured": true,
00:11:21.297 "data_offset": 0,
00:11:21.297 "data_size": 65536
00:11:21.297 },
00:11:21.297 {
00:11:21.297 "name": "BaseBdev4",
00:11:21.297 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.297 "is_configured": false,
00:11:21.297 "data_offset": 0,
00:11:21.297 "data_size": 0
00:11:21.297 }
00:11:21.297 ]
00:11:21.297 }'
00:11:21.297  03:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.297  03:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.555  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:21.555  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.555  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.814 [2024-11-05 03:22:35.203837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:21.814 [2024-11-05 03:22:35.203901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:21.814 [2024-11-05 03:22:35.203917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:21.814 [2024-11-05 03:22:35.204266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:21.814 [2024-11-05 03:22:35.204519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:21.814 [2024-11-05 03:22:35.204553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:21.814 [2024-11-05 03:22:35.204861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:21.814 BaseBdev4
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.814  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.814 [
00:11:21.814 {
00:11:21.814 "name": "BaseBdev4",
00:11:21.814 "aliases": [
00:11:21.814 "93b679fc-4ccd-47a2-b71e-1bf50a4cece6"
00:11:21.814 ],
00:11:21.814 "product_name": "Malloc disk",
00:11:21.814 "block_size": 512,
00:11:21.814 "num_blocks": 65536,
00:11:21.814 "uuid": "93b679fc-4ccd-47a2-b71e-1bf50a4cece6",
00:11:21.814 "assigned_rate_limits": {
00:11:21.814 "rw_ios_per_sec": 0,
00:11:21.814 "rw_mbytes_per_sec": 0,
00:11:21.814 "r_mbytes_per_sec": 0,
00:11:21.814 "w_mbytes_per_sec": 0
00:11:21.814 },
00:11:21.814 "claimed": true,
00:11:21.814 "claim_type": "exclusive_write",
00:11:21.814 "zoned": false,
00:11:21.814 "supported_io_types": {
00:11:21.814 "read": true,
00:11:21.814 "write": true,
00:11:21.814 "unmap": true,
00:11:21.814 "flush": true,
00:11:21.814 "reset": true,
00:11:21.814 "nvme_admin": false,
00:11:21.814 "nvme_io": false,
00:11:21.814 "nvme_io_md": false,
00:11:21.814 "write_zeroes": true,
00:11:21.814 "zcopy": true,
00:11:21.814 "get_zone_info": false,
00:11:21.814 "zone_management": false,
00:11:21.814 "zone_append": false,
00:11:21.814 "compare": false,
00:11:21.814 "compare_and_write": false,
00:11:21.814 "abort": true,
00:11:21.814 "seek_hole": false,
00:11:21.814 "seek_data": false,
00:11:21.815 "copy": true,
00:11:21.815 "nvme_iov_md": false
00:11:21.815 },
00:11:21.815 "memory_domains": [
00:11:21.815 {
00:11:21.815 "dma_device_id": "system",
00:11:21.815 "dma_device_type": 1
00:11:21.815 },
00:11:21.815 {
00:11:21.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.815 "dma_device_type": 2
00:11:21.815 }
00:11:21.815 ],
00:11:21.815 "driver_specific": {}
00:11:21.815 }
00:11:21.815 ]
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.815 "name": "Existed_Raid",
00:11:21.815 "uuid": "426a643b-aecd-4c5f-9913-8b128d07694f",
00:11:21.815 "strip_size_kb": 64,
00:11:21.815 "state": "online",
00:11:21.815 "raid_level": "raid0",
00:11:21.815 "superblock": false,
00:11:21.815 "num_base_bdevs": 4,
00:11:21.815 "num_base_bdevs_discovered": 4,
00:11:21.815 "num_base_bdevs_operational": 4,
00:11:21.815 "base_bdevs_list": [
00:11:21.815 {
00:11:21.815 "name": "BaseBdev1",
00:11:21.815 "uuid": "b479a122-0450-466a-adbd-6cb577147937",
00:11:21.815 "is_configured": true,
00:11:21.815 "data_offset": 0,
00:11:21.815 "data_size": 65536
00:11:21.815 },
00:11:21.815 {
00:11:21.815 "name": "BaseBdev2",
00:11:21.815 "uuid": "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27",
00:11:21.815 "is_configured": true,
00:11:21.815 "data_offset": 0,
00:11:21.815 "data_size": 65536
00:11:21.815 },
00:11:21.815 {
00:11:21.815 "name": "BaseBdev3",
00:11:21.815 "uuid": "297ff432-d422-44f1-b746-fc7457e2ce41",
00:11:21.815 "is_configured": true,
00:11:21.815 "data_offset": 0,
00:11:21.815 "data_size": 65536
00:11:21.815 },
00:11:21.815 {
00:11:21.815 "name": "BaseBdev4",
00:11:21.815 "uuid": "93b679fc-4ccd-47a2-b71e-1bf50a4cece6",
00:11:21.815 "is_configured": true,
00:11:21.815 "data_offset": 0,
00:11:21.815 "data_size": 65536
00:11:21.815 }
00:11:21.815 ]
00:11:21.815 }'
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.815  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:22.383 [2024-11-05 03:22:35.756493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:22.383  03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:22.383 "name": "Existed_Raid",
00:11:22.383 "aliases": [
00:11:22.383 "426a643b-aecd-4c5f-9913-8b128d07694f"
00:11:22.383 ],
00:11:22.383 "product_name": "Raid Volume",
00:11:22.383 "block_size": 512,
00:11:22.383 "num_blocks": 262144,
00:11:22.383 "uuid": "426a643b-aecd-4c5f-9913-8b128d07694f",
00:11:22.383 "assigned_rate_limits": {
00:11:22.383 "rw_ios_per_sec": 0,
00:11:22.383 "rw_mbytes_per_sec": 0,
00:11:22.383 "r_mbytes_per_sec": 0,
00:11:22.383 "w_mbytes_per_sec": 0
00:11:22.383 },
00:11:22.383 "claimed": false,
00:11:22.383 "zoned": false,
00:11:22.383 "supported_io_types": {
00:11:22.383 "read": true,
00:11:22.383 "write": true,
00:11:22.383 "unmap": true,
00:11:22.383 "flush": true,
00:11:22.383 "reset": true,
00:11:22.383 "nvme_admin": false,
00:11:22.383 "nvme_io": false,
00:11:22.383 "nvme_io_md": false,
00:11:22.383 "write_zeroes": true,
00:11:22.383 "zcopy": false,
00:11:22.383 "get_zone_info": false,
00:11:22.383 "zone_management": false,
00:11:22.383 "zone_append": false,
00:11:22.383 "compare": false,
00:11:22.383 "compare_and_write": false,
00:11:22.383 "abort": false,
00:11:22.383 "seek_hole": false,
00:11:22.383 "seek_data": false,
00:11:22.383 "copy": false,
00:11:22.383 "nvme_iov_md": false
00:11:22.383 },
00:11:22.383 "memory_domains": [
00:11:22.383 {
00:11:22.383 "dma_device_id": "system",
00:11:22.383 "dma_device_type": 1
00:11:22.383 },
00:11:22.383 {
00:11:22.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:22.383 "dma_device_type": 2
00:11:22.383 },
00:11:22.383 {
00:11:22.383 "dma_device_id": "system",
00:11:22.383 "dma_device_type": 1
00:11:22.383 },
00:11:22.383 {
00:11:22.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:22.383 "dma_device_type": 2
00:11:22.383 },
00:11:22.383 {
00:11:22.383 "dma_device_id": "system",
00:11:22.383 "dma_device_type": 1
00:11:22.383 },
00:11:22.383 {
00:11:22.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:22.383 "dma_device_type": 2 00:11:22.383 }, 00:11:22.383 { 00:11:22.383 "dma_device_id": "system", 00:11:22.383 "dma_device_type": 1 00:11:22.383 }, 00:11:22.383 { 00:11:22.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.383 "dma_device_type": 2 00:11:22.383 } 00:11:22.383 ], 00:11:22.383 "driver_specific": { 00:11:22.383 "raid": { 00:11:22.383 "uuid": "426a643b-aecd-4c5f-9913-8b128d07694f", 00:11:22.383 "strip_size_kb": 64, 00:11:22.383 "state": "online", 00:11:22.383 "raid_level": "raid0", 00:11:22.383 "superblock": false, 00:11:22.383 "num_base_bdevs": 4, 00:11:22.383 "num_base_bdevs_discovered": 4, 00:11:22.383 "num_base_bdevs_operational": 4, 00:11:22.383 "base_bdevs_list": [ 00:11:22.383 { 00:11:22.383 "name": "BaseBdev1", 00:11:22.383 "uuid": "b479a122-0450-466a-adbd-6cb577147937", 00:11:22.383 "is_configured": true, 00:11:22.383 "data_offset": 0, 00:11:22.383 "data_size": 65536 00:11:22.383 }, 00:11:22.383 { 00:11:22.383 "name": "BaseBdev2", 00:11:22.383 "uuid": "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27", 00:11:22.383 "is_configured": true, 00:11:22.383 "data_offset": 0, 00:11:22.383 "data_size": 65536 00:11:22.383 }, 00:11:22.383 { 00:11:22.383 "name": "BaseBdev3", 00:11:22.383 "uuid": "297ff432-d422-44f1-b746-fc7457e2ce41", 00:11:22.383 "is_configured": true, 00:11:22.383 "data_offset": 0, 00:11:22.383 "data_size": 65536 00:11:22.383 }, 00:11:22.383 { 00:11:22.383 "name": "BaseBdev4", 00:11:22.383 "uuid": "93b679fc-4ccd-47a2-b71e-1bf50a4cece6", 00:11:22.383 "is_configured": true, 00:11:22.383 "data_offset": 0, 00:11:22.383 "data_size": 65536 00:11:22.383 } 00:11:22.383 ] 00:11:22.383 } 00:11:22.383 } 00:11:22.383 }' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:22.383 BaseBdev2 00:11:22.383 BaseBdev3 
00:11:22.383 BaseBdev4' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.383 03:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.383 03:22:36 
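The jq filter recorded at bdev_raid.sh@188 above (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`) is what produces the `base_bdev_names='BaseBdev1 ... BaseBdev4'` assignment in the trace. As a minimal illustration of that selection logic, here is a Python equivalent run against a trimmed copy of the JSON dumped at bdev_raid.sh@187 (only the fields the filter touches are kept; this sketch is not part of the test script itself):

```python
import json

# Trimmed excerpt of the raid_bdev_info payload captured at bdev_raid.sh@187.
raid_info = json.loads("""
{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
names = [
    b["name"]
    for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(" ".join(names))  # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

When a base bdev is later removed (as at bdev_raid.sh@259 below, where BaseBdev1's entry becomes `"is_configured": false`), the same filter drops it from the list, which is how the test distinguishes the online and offline states.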
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.383 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.383 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.383 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.383 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.383 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.383 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.642 03:22:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.642 [2024-11-05 03:22:36.116193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.642 [2024-11-05 03:22:36.116249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.642 [2024-11-05 03:22:36.116330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.642 "name": "Existed_Raid", 00:11:22.642 "uuid": "426a643b-aecd-4c5f-9913-8b128d07694f", 00:11:22.642 "strip_size_kb": 64, 00:11:22.642 "state": "offline", 00:11:22.642 "raid_level": "raid0", 00:11:22.642 "superblock": false, 00:11:22.642 "num_base_bdevs": 4, 00:11:22.642 "num_base_bdevs_discovered": 3, 00:11:22.642 "num_base_bdevs_operational": 3, 00:11:22.642 "base_bdevs_list": [ 00:11:22.642 { 00:11:22.642 "name": null, 00:11:22.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.642 "is_configured": false, 00:11:22.642 "data_offset": 0, 00:11:22.642 "data_size": 65536 00:11:22.642 }, 00:11:22.642 { 00:11:22.642 "name": "BaseBdev2", 00:11:22.642 "uuid": "e909f0d4-0022-4ac9-a1d5-fb1d197e1c27", 00:11:22.642 "is_configured": 
true, 00:11:22.642 "data_offset": 0, 00:11:22.642 "data_size": 65536 00:11:22.642 }, 00:11:22.642 { 00:11:22.642 "name": "BaseBdev3", 00:11:22.642 "uuid": "297ff432-d422-44f1-b746-fc7457e2ce41", 00:11:22.642 "is_configured": true, 00:11:22.642 "data_offset": 0, 00:11:22.642 "data_size": 65536 00:11:22.642 }, 00:11:22.642 { 00:11:22.642 "name": "BaseBdev4", 00:11:22.642 "uuid": "93b679fc-4ccd-47a2-b71e-1bf50a4cece6", 00:11:22.642 "is_configured": true, 00:11:22.642 "data_offset": 0, 00:11:22.642 "data_size": 65536 00:11:22.642 } 00:11:22.642 ] 00:11:22.642 }' 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.642 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.209 [2024-11-05 03:22:36.754695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.209 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.467 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.467 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.467 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.467 03:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:23.467 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.467 03:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.467 [2024-11-05 03:22:36.918743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.467 03:22:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.467 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.467 [2024-11-05 03:22:37.062050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:23.467 [2024-11-05 03:22:37.062107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.726 BaseBdev2 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.726 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.727 [ 00:11:23.727 { 00:11:23.727 "name": "BaseBdev2", 00:11:23.727 "aliases": [ 00:11:23.727 "98c78de9-2657-4106-8e21-a7d441840758" 00:11:23.727 ], 00:11:23.727 "product_name": "Malloc disk", 00:11:23.727 "block_size": 512, 00:11:23.727 "num_blocks": 65536, 00:11:23.727 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:23.727 "assigned_rate_limits": { 00:11:23.727 "rw_ios_per_sec": 0, 00:11:23.727 "rw_mbytes_per_sec": 0, 00:11:23.727 "r_mbytes_per_sec": 0, 00:11:23.727 "w_mbytes_per_sec": 0 00:11:23.727 }, 00:11:23.727 "claimed": false, 00:11:23.727 "zoned": false, 00:11:23.727 "supported_io_types": { 00:11:23.727 "read": true, 00:11:23.727 "write": true, 00:11:23.727 "unmap": true, 00:11:23.727 "flush": true, 00:11:23.727 "reset": true, 00:11:23.727 "nvme_admin": false, 00:11:23.727 "nvme_io": false, 00:11:23.727 "nvme_io_md": false, 00:11:23.727 "write_zeroes": true, 00:11:23.727 "zcopy": true, 00:11:23.727 "get_zone_info": false, 00:11:23.727 "zone_management": false, 00:11:23.727 "zone_append": false, 00:11:23.727 "compare": false, 00:11:23.727 "compare_and_write": false, 00:11:23.727 "abort": true, 00:11:23.727 "seek_hole": false, 00:11:23.727 
"seek_data": false, 00:11:23.727 "copy": true, 00:11:23.727 "nvme_iov_md": false 00:11:23.727 }, 00:11:23.727 "memory_domains": [ 00:11:23.727 { 00:11:23.727 "dma_device_id": "system", 00:11:23.727 "dma_device_type": 1 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.727 "dma_device_type": 2 00:11:23.727 } 00:11:23.727 ], 00:11:23.727 "driver_specific": {} 00:11:23.727 } 00:11:23.727 ] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.727 BaseBdev3 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.727 [ 00:11:23.727 { 00:11:23.727 "name": "BaseBdev3", 00:11:23.727 "aliases": [ 00:11:23.727 "32930862-2c98-41d9-b2c5-88f9ac9b6087" 00:11:23.727 ], 00:11:23.727 "product_name": "Malloc disk", 00:11:23.727 "block_size": 512, 00:11:23.727 "num_blocks": 65536, 00:11:23.727 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:23.727 "assigned_rate_limits": { 00:11:23.727 "rw_ios_per_sec": 0, 00:11:23.727 "rw_mbytes_per_sec": 0, 00:11:23.727 "r_mbytes_per_sec": 0, 00:11:23.727 "w_mbytes_per_sec": 0 00:11:23.727 }, 00:11:23.727 "claimed": false, 00:11:23.727 "zoned": false, 00:11:23.727 "supported_io_types": { 00:11:23.727 "read": true, 00:11:23.727 "write": true, 00:11:23.727 "unmap": true, 00:11:23.727 "flush": true, 00:11:23.727 "reset": true, 00:11:23.727 "nvme_admin": false, 00:11:23.727 "nvme_io": false, 00:11:23.727 "nvme_io_md": false, 00:11:23.727 "write_zeroes": true, 00:11:23.727 "zcopy": true, 00:11:23.727 "get_zone_info": false, 00:11:23.727 "zone_management": false, 00:11:23.727 "zone_append": false, 00:11:23.727 "compare": false, 00:11:23.727 "compare_and_write": false, 00:11:23.727 "abort": true, 00:11:23.727 "seek_hole": false, 00:11:23.727 "seek_data": false, 
00:11:23.727 "copy": true, 00:11:23.727 "nvme_iov_md": false 00:11:23.727 }, 00:11:23.727 "memory_domains": [ 00:11:23.727 { 00:11:23.727 "dma_device_id": "system", 00:11:23.727 "dma_device_type": 1 00:11:23.727 }, 00:11:23.727 { 00:11:23.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.727 "dma_device_type": 2 00:11:23.727 } 00:11:23.727 ], 00:11:23.727 "driver_specific": {} 00:11:23.727 } 00:11:23.727 ] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.727 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.986 BaseBdev4 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:23.986 
03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.986 [ 00:11:23.986 { 00:11:23.986 "name": "BaseBdev4", 00:11:23.986 "aliases": [ 00:11:23.986 "052c8e69-6d31-49a6-837f-6fd3e0e68a3b" 00:11:23.986 ], 00:11:23.986 "product_name": "Malloc disk", 00:11:23.986 "block_size": 512, 00:11:23.986 "num_blocks": 65536, 00:11:23.986 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:23.986 "assigned_rate_limits": { 00:11:23.986 "rw_ios_per_sec": 0, 00:11:23.986 "rw_mbytes_per_sec": 0, 00:11:23.986 "r_mbytes_per_sec": 0, 00:11:23.986 "w_mbytes_per_sec": 0 00:11:23.986 }, 00:11:23.986 "claimed": false, 00:11:23.986 "zoned": false, 00:11:23.986 "supported_io_types": { 00:11:23.986 "read": true, 00:11:23.986 "write": true, 00:11:23.986 "unmap": true, 00:11:23.986 "flush": true, 00:11:23.986 "reset": true, 00:11:23.986 "nvme_admin": false, 00:11:23.986 "nvme_io": false, 00:11:23.986 "nvme_io_md": false, 00:11:23.986 "write_zeroes": true, 00:11:23.986 "zcopy": true, 00:11:23.986 "get_zone_info": false, 00:11:23.986 "zone_management": false, 00:11:23.986 "zone_append": false, 00:11:23.986 "compare": false, 00:11:23.986 "compare_and_write": false, 00:11:23.986 "abort": true, 00:11:23.986 "seek_hole": false, 00:11:23.986 "seek_data": false, 00:11:23.986 
"copy": true, 00:11:23.986 "nvme_iov_md": false 00:11:23.986 }, 00:11:23.986 "memory_domains": [ 00:11:23.986 { 00:11:23.986 "dma_device_id": "system", 00:11:23.986 "dma_device_type": 1 00:11:23.986 }, 00:11:23.986 { 00:11:23.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.986 "dma_device_type": 2 00:11:23.986 } 00:11:23.986 ], 00:11:23.986 "driver_specific": {} 00:11:23.986 } 00:11:23.986 ] 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.986 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.986 [2024-11-05 03:22:37.433679] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.986 [2024-11-05 03:22:37.433879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.986 [2024-11-05 03:22:37.433934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.986 [2024-11-05 03:22:37.436376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.987 [2024-11-05 03:22:37.436450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.987 03:22:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.987 "name": "Existed_Raid", 00:11:23.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.987 "strip_size_kb": 64, 00:11:23.987 "state": "configuring", 00:11:23.987 
"raid_level": "raid0", 00:11:23.987 "superblock": false, 00:11:23.987 "num_base_bdevs": 4, 00:11:23.987 "num_base_bdevs_discovered": 3, 00:11:23.987 "num_base_bdevs_operational": 4, 00:11:23.987 "base_bdevs_list": [ 00:11:23.987 { 00:11:23.987 "name": "BaseBdev1", 00:11:23.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.987 "is_configured": false, 00:11:23.987 "data_offset": 0, 00:11:23.987 "data_size": 0 00:11:23.987 }, 00:11:23.987 { 00:11:23.987 "name": "BaseBdev2", 00:11:23.987 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:23.987 "is_configured": true, 00:11:23.987 "data_offset": 0, 00:11:23.987 "data_size": 65536 00:11:23.987 }, 00:11:23.987 { 00:11:23.987 "name": "BaseBdev3", 00:11:23.987 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:23.987 "is_configured": true, 00:11:23.987 "data_offset": 0, 00:11:23.987 "data_size": 65536 00:11:23.987 }, 00:11:23.987 { 00:11:23.987 "name": "BaseBdev4", 00:11:23.987 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:23.987 "is_configured": true, 00:11:23.987 "data_offset": 0, 00:11:23.987 "data_size": 65536 00:11:23.987 } 00:11:23.987 ] 00:11:23.987 }' 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.987 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.554 [2024-11-05 03:22:37.949777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.554 03:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.555 03:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.555 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.555 "name": "Existed_Raid", 00:11:24.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.555 "strip_size_kb": 64, 00:11:24.555 "state": "configuring", 00:11:24.555 "raid_level": "raid0", 00:11:24.555 "superblock": false, 00:11:24.555 
"num_base_bdevs": 4, 00:11:24.555 "num_base_bdevs_discovered": 2, 00:11:24.555 "num_base_bdevs_operational": 4, 00:11:24.555 "base_bdevs_list": [ 00:11:24.555 { 00:11:24.555 "name": "BaseBdev1", 00:11:24.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.555 "is_configured": false, 00:11:24.555 "data_offset": 0, 00:11:24.555 "data_size": 0 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "name": null, 00:11:24.555 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:24.555 "is_configured": false, 00:11:24.555 "data_offset": 0, 00:11:24.555 "data_size": 65536 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "name": "BaseBdev3", 00:11:24.555 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:24.555 "is_configured": true, 00:11:24.555 "data_offset": 0, 00:11:24.555 "data_size": 65536 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "name": "BaseBdev4", 00:11:24.555 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:24.555 "is_configured": true, 00:11:24.555 "data_offset": 0, 00:11:24.555 "data_size": 65536 00:11:24.555 } 00:11:24.555 ] 00:11:24.555 }' 00:11:24.555 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.555 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:25.122 03:22:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.122 [2024-11-05 03:22:38.579812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.122 BaseBdev1 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.122 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.123 [ 00:11:25.123 { 00:11:25.123 "name": "BaseBdev1", 00:11:25.123 "aliases": [ 00:11:25.123 "666cc4ff-d966-4406-97be-03358290788e" 00:11:25.123 ], 00:11:25.123 "product_name": "Malloc disk", 00:11:25.123 "block_size": 512, 00:11:25.123 "num_blocks": 65536, 00:11:25.123 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:25.123 "assigned_rate_limits": { 00:11:25.123 "rw_ios_per_sec": 0, 00:11:25.123 "rw_mbytes_per_sec": 0, 00:11:25.123 "r_mbytes_per_sec": 0, 00:11:25.123 "w_mbytes_per_sec": 0 00:11:25.123 }, 00:11:25.123 "claimed": true, 00:11:25.123 "claim_type": "exclusive_write", 00:11:25.123 "zoned": false, 00:11:25.123 "supported_io_types": { 00:11:25.123 "read": true, 00:11:25.123 "write": true, 00:11:25.123 "unmap": true, 00:11:25.123 "flush": true, 00:11:25.123 "reset": true, 00:11:25.123 "nvme_admin": false, 00:11:25.123 "nvme_io": false, 00:11:25.123 "nvme_io_md": false, 00:11:25.123 "write_zeroes": true, 00:11:25.123 "zcopy": true, 00:11:25.123 "get_zone_info": false, 00:11:25.123 "zone_management": false, 00:11:25.123 "zone_append": false, 00:11:25.123 "compare": false, 00:11:25.123 "compare_and_write": false, 00:11:25.123 "abort": true, 00:11:25.123 "seek_hole": false, 00:11:25.123 "seek_data": false, 00:11:25.123 "copy": true, 00:11:25.123 "nvme_iov_md": false 00:11:25.123 }, 00:11:25.123 "memory_domains": [ 00:11:25.123 { 00:11:25.123 "dma_device_id": "system", 00:11:25.123 "dma_device_type": 1 00:11:25.123 }, 00:11:25.123 { 00:11:25.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.123 "dma_device_type": 2 00:11:25.123 } 00:11:25.123 ], 00:11:25.123 "driver_specific": {} 00:11:25.123 } 00:11:25.123 ] 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.123 "name": "Existed_Raid", 00:11:25.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.123 "strip_size_kb": 64, 00:11:25.123 "state": "configuring", 00:11:25.123 "raid_level": "raid0", 00:11:25.123 "superblock": false, 
00:11:25.123 "num_base_bdevs": 4, 00:11:25.123 "num_base_bdevs_discovered": 3, 00:11:25.123 "num_base_bdevs_operational": 4, 00:11:25.123 "base_bdevs_list": [ 00:11:25.123 { 00:11:25.123 "name": "BaseBdev1", 00:11:25.123 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:25.123 "is_configured": true, 00:11:25.123 "data_offset": 0, 00:11:25.123 "data_size": 65536 00:11:25.123 }, 00:11:25.123 { 00:11:25.123 "name": null, 00:11:25.123 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:25.123 "is_configured": false, 00:11:25.123 "data_offset": 0, 00:11:25.123 "data_size": 65536 00:11:25.123 }, 00:11:25.123 { 00:11:25.123 "name": "BaseBdev3", 00:11:25.123 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:25.123 "is_configured": true, 00:11:25.123 "data_offset": 0, 00:11:25.123 "data_size": 65536 00:11:25.123 }, 00:11:25.123 { 00:11:25.123 "name": "BaseBdev4", 00:11:25.123 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:25.123 "is_configured": true, 00:11:25.123 "data_offset": 0, 00:11:25.123 "data_size": 65536 00:11:25.123 } 00:11:25.123 ] 00:11:25.123 }' 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.123 03:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:25.691 03:22:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 [2024-11-05 03:22:39.200075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.691 "name": "Existed_Raid", 00:11:25.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.691 "strip_size_kb": 64, 00:11:25.691 "state": "configuring", 00:11:25.691 "raid_level": "raid0", 00:11:25.691 "superblock": false, 00:11:25.691 "num_base_bdevs": 4, 00:11:25.691 "num_base_bdevs_discovered": 2, 00:11:25.691 "num_base_bdevs_operational": 4, 00:11:25.691 "base_bdevs_list": [ 00:11:25.691 { 00:11:25.691 "name": "BaseBdev1", 00:11:25.691 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:25.691 "is_configured": true, 00:11:25.691 "data_offset": 0, 00:11:25.691 "data_size": 65536 00:11:25.691 }, 00:11:25.691 { 00:11:25.691 "name": null, 00:11:25.691 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:25.691 "is_configured": false, 00:11:25.691 "data_offset": 0, 00:11:25.691 "data_size": 65536 00:11:25.691 }, 00:11:25.691 { 00:11:25.691 "name": null, 00:11:25.691 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:25.691 "is_configured": false, 00:11:25.691 "data_offset": 0, 00:11:25.691 "data_size": 65536 00:11:25.691 }, 00:11:25.691 { 00:11:25.691 "name": "BaseBdev4", 00:11:25.691 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:25.691 "is_configured": true, 00:11:25.691 "data_offset": 0, 00:11:25.691 "data_size": 65536 00:11:25.691 } 00:11:25.691 ] 00:11:25.691 }' 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.691 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.259 [2024-11-05 03:22:39.796223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.259 "name": "Existed_Raid", 00:11:26.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.259 "strip_size_kb": 64, 00:11:26.259 "state": "configuring", 00:11:26.259 "raid_level": "raid0", 00:11:26.259 "superblock": false, 00:11:26.259 "num_base_bdevs": 4, 00:11:26.259 "num_base_bdevs_discovered": 3, 00:11:26.259 "num_base_bdevs_operational": 4, 00:11:26.259 "base_bdevs_list": [ 00:11:26.259 { 00:11:26.259 "name": "BaseBdev1", 00:11:26.259 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:26.259 "is_configured": true, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 65536 00:11:26.259 }, 00:11:26.259 { 00:11:26.259 "name": null, 00:11:26.259 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:26.259 "is_configured": false, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 65536 00:11:26.259 }, 00:11:26.259 { 00:11:26.259 "name": "BaseBdev3", 00:11:26.259 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:26.259 "is_configured": 
true, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 65536 00:11:26.259 }, 00:11:26.259 { 00:11:26.259 "name": "BaseBdev4", 00:11:26.259 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:26.259 "is_configured": true, 00:11:26.259 "data_offset": 0, 00:11:26.259 "data_size": 65536 00:11:26.259 } 00:11:26.259 ] 00:11:26.259 }' 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.259 03:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 [2024-11-05 03:22:40.372480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.110 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.110 "name": "Existed_Raid", 00:11:27.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.110 "strip_size_kb": 64, 00:11:27.110 "state": "configuring", 00:11:27.110 "raid_level": "raid0", 00:11:27.110 "superblock": false, 00:11:27.110 "num_base_bdevs": 4, 00:11:27.110 "num_base_bdevs_discovered": 2, 00:11:27.110 "num_base_bdevs_operational": 4, 00:11:27.110 
"base_bdevs_list": [ 00:11:27.110 { 00:11:27.110 "name": null, 00:11:27.110 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:27.110 "is_configured": false, 00:11:27.110 "data_offset": 0, 00:11:27.110 "data_size": 65536 00:11:27.110 }, 00:11:27.110 { 00:11:27.110 "name": null, 00:11:27.110 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:27.110 "is_configured": false, 00:11:27.110 "data_offset": 0, 00:11:27.110 "data_size": 65536 00:11:27.110 }, 00:11:27.110 { 00:11:27.110 "name": "BaseBdev3", 00:11:27.110 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:27.110 "is_configured": true, 00:11:27.110 "data_offset": 0, 00:11:27.110 "data_size": 65536 00:11:27.110 }, 00:11:27.110 { 00:11:27.110 "name": "BaseBdev4", 00:11:27.110 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:27.110 "is_configured": true, 00:11:27.110 "data_offset": 0, 00:11:27.110 "data_size": 65536 00:11:27.110 } 00:11:27.110 ] 00:11:27.110 }' 00:11:27.110 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.110 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.368 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.368 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.368 03:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.368 03:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:27.626 03:22:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.626 [2024-11-05 03:22:41.046408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.626 "name": "Existed_Raid", 00:11:27.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.626 "strip_size_kb": 64, 00:11:27.626 "state": "configuring", 00:11:27.626 "raid_level": "raid0", 00:11:27.626 "superblock": false, 00:11:27.626 "num_base_bdevs": 4, 00:11:27.626 "num_base_bdevs_discovered": 3, 00:11:27.626 "num_base_bdevs_operational": 4, 00:11:27.626 "base_bdevs_list": [ 00:11:27.626 { 00:11:27.626 "name": null, 00:11:27.626 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:27.626 "is_configured": false, 00:11:27.626 "data_offset": 0, 00:11:27.626 "data_size": 65536 00:11:27.626 }, 00:11:27.626 { 00:11:27.626 "name": "BaseBdev2", 00:11:27.626 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 0, 00:11:27.626 "data_size": 65536 00:11:27.626 }, 00:11:27.626 { 00:11:27.626 "name": "BaseBdev3", 00:11:27.626 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 0, 00:11:27.626 "data_size": 65536 00:11:27.626 }, 00:11:27.626 { 00:11:27.626 "name": "BaseBdev4", 00:11:27.626 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:27.626 "is_configured": true, 00:11:27.626 "data_offset": 0, 00:11:27.626 "data_size": 65536 00:11:27.626 } 00:11:27.626 ] 00:11:27.626 }' 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.626 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 666cc4ff-d966-4406-97be-03358290788e 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 [2024-11-05 03:22:41.706188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:28.194 [2024-11-05 03:22:41.706270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.194 [2024-11-05 03:22:41.706282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:28.194 [2024-11-05 03:22:41.706674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:28.194 [2024-11-05 03:22:41.706864] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000008200 00:11:28.194 [2024-11-05 03:22:41.706886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:28.194 [2024-11-05 03:22:41.707189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.194 NewBaseBdev 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 [ 00:11:28.194 { 00:11:28.194 "name": "NewBaseBdev", 00:11:28.194 
"aliases": [ 00:11:28.194 "666cc4ff-d966-4406-97be-03358290788e" 00:11:28.194 ], 00:11:28.194 "product_name": "Malloc disk", 00:11:28.194 "block_size": 512, 00:11:28.194 "num_blocks": 65536, 00:11:28.194 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:28.194 "assigned_rate_limits": { 00:11:28.194 "rw_ios_per_sec": 0, 00:11:28.194 "rw_mbytes_per_sec": 0, 00:11:28.194 "r_mbytes_per_sec": 0, 00:11:28.194 "w_mbytes_per_sec": 0 00:11:28.194 }, 00:11:28.194 "claimed": true, 00:11:28.194 "claim_type": "exclusive_write", 00:11:28.194 "zoned": false, 00:11:28.194 "supported_io_types": { 00:11:28.194 "read": true, 00:11:28.194 "write": true, 00:11:28.194 "unmap": true, 00:11:28.194 "flush": true, 00:11:28.194 "reset": true, 00:11:28.194 "nvme_admin": false, 00:11:28.194 "nvme_io": false, 00:11:28.194 "nvme_io_md": false, 00:11:28.194 "write_zeroes": true, 00:11:28.194 "zcopy": true, 00:11:28.194 "get_zone_info": false, 00:11:28.194 "zone_management": false, 00:11:28.194 "zone_append": false, 00:11:28.194 "compare": false, 00:11:28.194 "compare_and_write": false, 00:11:28.194 "abort": true, 00:11:28.194 "seek_hole": false, 00:11:28.194 "seek_data": false, 00:11:28.194 "copy": true, 00:11:28.194 "nvme_iov_md": false 00:11:28.194 }, 00:11:28.194 "memory_domains": [ 00:11:28.194 { 00:11:28.194 "dma_device_id": "system", 00:11:28.194 "dma_device_type": 1 00:11:28.194 }, 00:11:28.194 { 00:11:28.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.194 "dma_device_type": 2 00:11:28.194 } 00:11:28.194 ], 00:11:28.194 "driver_specific": {} 00:11:28.194 } 00:11:28.194 ] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.194 "name": "Existed_Raid", 00:11:28.194 "uuid": "eb0102d9-dada-4704-bfc0-1f13e672f382", 00:11:28.194 "strip_size_kb": 64, 00:11:28.194 "state": "online", 00:11:28.194 "raid_level": "raid0", 00:11:28.194 "superblock": false, 00:11:28.194 "num_base_bdevs": 4, 00:11:28.194 "num_base_bdevs_discovered": 4, 00:11:28.194 "num_base_bdevs_operational": 4, 00:11:28.194 
"base_bdevs_list": [ 00:11:28.194 { 00:11:28.194 "name": "NewBaseBdev", 00:11:28.194 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:28.194 "is_configured": true, 00:11:28.194 "data_offset": 0, 00:11:28.194 "data_size": 65536 00:11:28.194 }, 00:11:28.194 { 00:11:28.194 "name": "BaseBdev2", 00:11:28.194 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:28.194 "is_configured": true, 00:11:28.194 "data_offset": 0, 00:11:28.194 "data_size": 65536 00:11:28.194 }, 00:11:28.194 { 00:11:28.194 "name": "BaseBdev3", 00:11:28.194 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:28.194 "is_configured": true, 00:11:28.194 "data_offset": 0, 00:11:28.194 "data_size": 65536 00:11:28.194 }, 00:11:28.194 { 00:11:28.194 "name": "BaseBdev4", 00:11:28.194 "uuid": "052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:28.194 "is_configured": true, 00:11:28.194 "data_offset": 0, 00:11:28.194 "data_size": 65536 00:11:28.194 } 00:11:28.194 ] 00:11:28.194 }' 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.194 03:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.762 [2024-11-05 03:22:42.258898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.762 "name": "Existed_Raid", 00:11:28.762 "aliases": [ 00:11:28.762 "eb0102d9-dada-4704-bfc0-1f13e672f382" 00:11:28.762 ], 00:11:28.762 "product_name": "Raid Volume", 00:11:28.762 "block_size": 512, 00:11:28.762 "num_blocks": 262144, 00:11:28.762 "uuid": "eb0102d9-dada-4704-bfc0-1f13e672f382", 00:11:28.762 "assigned_rate_limits": { 00:11:28.762 "rw_ios_per_sec": 0, 00:11:28.762 "rw_mbytes_per_sec": 0, 00:11:28.762 "r_mbytes_per_sec": 0, 00:11:28.762 "w_mbytes_per_sec": 0 00:11:28.762 }, 00:11:28.762 "claimed": false, 00:11:28.762 "zoned": false, 00:11:28.762 "supported_io_types": { 00:11:28.762 "read": true, 00:11:28.762 "write": true, 00:11:28.762 "unmap": true, 00:11:28.762 "flush": true, 00:11:28.762 "reset": true, 00:11:28.762 "nvme_admin": false, 00:11:28.762 "nvme_io": false, 00:11:28.762 "nvme_io_md": false, 00:11:28.762 "write_zeroes": true, 00:11:28.762 "zcopy": false, 00:11:28.762 "get_zone_info": false, 00:11:28.762 "zone_management": false, 00:11:28.762 "zone_append": false, 00:11:28.762 "compare": false, 00:11:28.762 "compare_and_write": false, 00:11:28.762 "abort": false, 00:11:28.762 "seek_hole": false, 00:11:28.762 "seek_data": false, 00:11:28.762 "copy": false, 00:11:28.762 "nvme_iov_md": false 00:11:28.762 }, 00:11:28.762 "memory_domains": [ 00:11:28.762 { 00:11:28.762 "dma_device_id": "system", 00:11:28.762 "dma_device_type": 1 
00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.762 "dma_device_type": 2 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "system", 00:11:28.762 "dma_device_type": 1 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.762 "dma_device_type": 2 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "system", 00:11:28.762 "dma_device_type": 1 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.762 "dma_device_type": 2 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "system", 00:11:28.762 "dma_device_type": 1 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.762 "dma_device_type": 2 00:11:28.762 } 00:11:28.762 ], 00:11:28.762 "driver_specific": { 00:11:28.762 "raid": { 00:11:28.762 "uuid": "eb0102d9-dada-4704-bfc0-1f13e672f382", 00:11:28.762 "strip_size_kb": 64, 00:11:28.762 "state": "online", 00:11:28.762 "raid_level": "raid0", 00:11:28.762 "superblock": false, 00:11:28.762 "num_base_bdevs": 4, 00:11:28.762 "num_base_bdevs_discovered": 4, 00:11:28.762 "num_base_bdevs_operational": 4, 00:11:28.762 "base_bdevs_list": [ 00:11:28.762 { 00:11:28.762 "name": "NewBaseBdev", 00:11:28.762 "uuid": "666cc4ff-d966-4406-97be-03358290788e", 00:11:28.762 "is_configured": true, 00:11:28.762 "data_offset": 0, 00:11:28.762 "data_size": 65536 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "name": "BaseBdev2", 00:11:28.762 "uuid": "98c78de9-2657-4106-8e21-a7d441840758", 00:11:28.762 "is_configured": true, 00:11:28.762 "data_offset": 0, 00:11:28.762 "data_size": 65536 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "name": "BaseBdev3", 00:11:28.762 "uuid": "32930862-2c98-41d9-b2c5-88f9ac9b6087", 00:11:28.762 "is_configured": true, 00:11:28.762 "data_offset": 0, 00:11:28.762 "data_size": 65536 00:11:28.762 }, 00:11:28.762 { 00:11:28.762 "name": "BaseBdev4", 00:11:28.762 "uuid": 
"052c8e69-6d31-49a6-837f-6fd3e0e68a3b", 00:11:28.762 "is_configured": true, 00:11:28.762 "data_offset": 0, 00:11:28.762 "data_size": 65536 00:11:28.762 } 00:11:28.762 ] 00:11:28.762 } 00:11:28.762 } 00:11:28.762 }' 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:28.762 BaseBdev2 00:11:28.762 BaseBdev3 00:11:28.762 BaseBdev4' 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.762 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.021 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.022 [2024-11-05 03:22:42.614507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.022 [2024-11-05 03:22:42.614555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.022 [2024-11-05 03:22:42.614651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.022 [2024-11-05 03:22:42.614738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.022 [2024-11-05 03:22:42.614755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69217 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 69217 ']' 00:11:29.022 
03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69217 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69217 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:29.022 killing process with pid 69217 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69217' 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69217 00:11:29.022 [2024-11-05 03:22:42.652929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.022 03:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69217 00:11:29.589 [2024-11-05 03:22:42.997757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.525 03:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:30.525 00:11:30.525 real 0m12.821s 00:11:30.525 user 0m21.439s 00:11:30.525 sys 0m1.711s 00:11:30.525 03:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.525 ************************************ 00:11:30.525 END TEST raid_state_function_test 00:11:30.525 ************************************ 00:11:30.525 03:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.525 03:22:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:11:30.525 03:22:44 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:30.525 03:22:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.525 03:22:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.525 ************************************ 00:11:30.525 START TEST raid_state_function_test_sb 00:11:30.525 ************************************ 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69905 00:11:30.525 Process raid pid: 69905 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
69905' 00:11:30.525 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69905 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 69905 ']' 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:30.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:30.526 03:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.784 [2024-11-05 03:22:44.163902] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:30.784 [2024-11-05 03:22:44.164096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.784 [2024-11-05 03:22:44.350975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.043 [2024-11-05 03:22:44.472155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.302 [2024-11-05 03:22:44.680550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.302 [2024-11-05 03:22:44.680593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.561 [2024-11-05 03:22:45.148417] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.561 [2024-11-05 03:22:45.148499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.561 [2024-11-05 03:22:45.148516] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.561 [2024-11-05 03:22:45.148532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.561 [2024-11-05 03:22:45.148542] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:31.561 [2024-11-05 03:22:45.148556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.561 [2024-11-05 03:22:45.148567] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.561 [2024-11-05 03:22:45.148581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.561 03:22:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.561 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.820 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.820 "name": "Existed_Raid", 00:11:31.820 "uuid": "6c71e834-4cdb-46e3-89a8-2100d6f50634", 00:11:31.820 "strip_size_kb": 64, 00:11:31.820 "state": "configuring", 00:11:31.820 "raid_level": "raid0", 00:11:31.820 "superblock": true, 00:11:31.820 "num_base_bdevs": 4, 00:11:31.820 "num_base_bdevs_discovered": 0, 00:11:31.820 "num_base_bdevs_operational": 4, 00:11:31.820 "base_bdevs_list": [ 00:11:31.820 { 00:11:31.820 "name": "BaseBdev1", 00:11:31.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.820 "is_configured": false, 00:11:31.820 "data_offset": 0, 00:11:31.820 "data_size": 0 00:11:31.820 }, 00:11:31.820 { 00:11:31.820 "name": "BaseBdev2", 00:11:31.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.820 "is_configured": false, 00:11:31.820 "data_offset": 0, 00:11:31.820 "data_size": 0 00:11:31.820 }, 00:11:31.820 { 00:11:31.820 "name": "BaseBdev3", 00:11:31.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.820 "is_configured": false, 00:11:31.820 "data_offset": 0, 00:11:31.820 "data_size": 0 00:11:31.820 }, 00:11:31.820 { 00:11:31.820 "name": "BaseBdev4", 00:11:31.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.820 "is_configured": false, 00:11:31.820 "data_offset": 0, 00:11:31.820 "data_size": 0 00:11:31.820 } 00:11:31.820 ] 00:11:31.820 }' 00:11:31.820 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.820 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.079 [2024-11-05 03:22:45.676473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.079 [2024-11-05 03:22:45.676530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.079 [2024-11-05 03:22:45.684475] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.079 [2024-11-05 03:22:45.684538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.079 [2024-11-05 03:22:45.684552] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.079 [2024-11-05 03:22:45.684567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.079 [2024-11-05 03:22:45.684577] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.079 [2024-11-05 03:22:45.684590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.079 [2024-11-05 03:22:45.684600] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:32.079 [2024-11-05 03:22:45.684614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.079 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 [2024-11-05 03:22:45.728788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.338 BaseBdev1 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 [ 00:11:32.338 { 00:11:32.338 "name": "BaseBdev1", 00:11:32.338 "aliases": [ 00:11:32.338 "c5016bb4-3086-412d-b770-704849c3657d" 00:11:32.338 ], 00:11:32.338 "product_name": "Malloc disk", 00:11:32.338 "block_size": 512, 00:11:32.338 "num_blocks": 65536, 00:11:32.338 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:32.338 "assigned_rate_limits": { 00:11:32.338 "rw_ios_per_sec": 0, 00:11:32.338 "rw_mbytes_per_sec": 0, 00:11:32.338 "r_mbytes_per_sec": 0, 00:11:32.338 "w_mbytes_per_sec": 0 00:11:32.338 }, 00:11:32.338 "claimed": true, 00:11:32.338 "claim_type": "exclusive_write", 00:11:32.338 "zoned": false, 00:11:32.338 "supported_io_types": { 00:11:32.338 "read": true, 00:11:32.338 "write": true, 00:11:32.338 "unmap": true, 00:11:32.338 "flush": true, 00:11:32.338 "reset": true, 00:11:32.338 "nvme_admin": false, 00:11:32.338 "nvme_io": false, 00:11:32.338 "nvme_io_md": false, 00:11:32.338 "write_zeroes": true, 00:11:32.338 "zcopy": true, 00:11:32.338 "get_zone_info": false, 00:11:32.338 "zone_management": false, 00:11:32.338 "zone_append": false, 00:11:32.338 "compare": false, 00:11:32.338 "compare_and_write": false, 00:11:32.338 "abort": true, 00:11:32.338 "seek_hole": false, 00:11:32.338 "seek_data": false, 00:11:32.338 "copy": true, 00:11:32.338 "nvme_iov_md": false 00:11:32.338 }, 00:11:32.338 "memory_domains": [ 00:11:32.338 { 00:11:32.338 "dma_device_id": "system", 00:11:32.338 "dma_device_type": 1 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.338 "dma_device_type": 2 00:11:32.338 } 00:11:32.338 ], 00:11:32.338 "driver_specific": {} 
00:11:32.338 } 00:11:32.338 ] 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.338 "name": "Existed_Raid", 00:11:32.338 "uuid": "d6e541cc-b8a8-4d9b-87dd-15cd8eb6c49f", 00:11:32.338 "strip_size_kb": 64, 00:11:32.338 "state": "configuring", 00:11:32.338 "raid_level": "raid0", 00:11:32.338 "superblock": true, 00:11:32.338 "num_base_bdevs": 4, 00:11:32.338 "num_base_bdevs_discovered": 1, 00:11:32.338 "num_base_bdevs_operational": 4, 00:11:32.338 "base_bdevs_list": [ 00:11:32.338 { 00:11:32.338 "name": "BaseBdev1", 00:11:32.338 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:32.338 "is_configured": true, 00:11:32.338 "data_offset": 2048, 00:11:32.338 "data_size": 63488 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "name": "BaseBdev2", 00:11:32.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.338 "is_configured": false, 00:11:32.338 "data_offset": 0, 00:11:32.338 "data_size": 0 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "name": "BaseBdev3", 00:11:32.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.338 "is_configured": false, 00:11:32.338 "data_offset": 0, 00:11:32.338 "data_size": 0 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "name": "BaseBdev4", 00:11:32.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.338 "is_configured": false, 00:11:32.338 "data_offset": 0, 00:11:32.338 "data_size": 0 00:11:32.338 } 00:11:32.338 ] 00:11:32.338 }' 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.338 03:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.905 [2024-11-05 03:22:46.272981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.905 [2024-11-05 03:22:46.273058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 [2024-11-05 03:22:46.281047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.905 [2024-11-05 03:22:46.283407] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.905 [2024-11-05 03:22:46.283464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.905 [2024-11-05 03:22:46.283480] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.905 [2024-11-05 03:22:46.283498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.905 [2024-11-05 03:22:46.283508] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.905 [2024-11-05 03:22:46.283522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:32.905 03:22:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.905 "name": 
"Existed_Raid", 00:11:32.905 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:32.905 "strip_size_kb": 64, 00:11:32.905 "state": "configuring", 00:11:32.905 "raid_level": "raid0", 00:11:32.905 "superblock": true, 00:11:32.905 "num_base_bdevs": 4, 00:11:32.905 "num_base_bdevs_discovered": 1, 00:11:32.905 "num_base_bdevs_operational": 4, 00:11:32.905 "base_bdevs_list": [ 00:11:32.905 { 00:11:32.905 "name": "BaseBdev1", 00:11:32.905 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:32.905 "is_configured": true, 00:11:32.905 "data_offset": 2048, 00:11:32.905 "data_size": 63488 00:11:32.905 }, 00:11:32.905 { 00:11:32.905 "name": "BaseBdev2", 00:11:32.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.905 "is_configured": false, 00:11:32.905 "data_offset": 0, 00:11:32.905 "data_size": 0 00:11:32.905 }, 00:11:32.905 { 00:11:32.905 "name": "BaseBdev3", 00:11:32.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.905 "is_configured": false, 00:11:32.905 "data_offset": 0, 00:11:32.905 "data_size": 0 00:11:32.905 }, 00:11:32.905 { 00:11:32.905 "name": "BaseBdev4", 00:11:32.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.905 "is_configured": false, 00:11:32.905 "data_offset": 0, 00:11:32.905 "data_size": 0 00:11:32.905 } 00:11:32.905 ] 00:11:32.905 }' 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.905 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 [2024-11-05 03:22:46.859883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:33.472 BaseBdev2 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 [ 00:11:33.472 { 00:11:33.472 "name": "BaseBdev2", 00:11:33.472 "aliases": [ 00:11:33.472 "9c28fba5-3382-4878-88c8-46f025ff2a9a" 00:11:33.472 ], 00:11:33.472 "product_name": "Malloc disk", 00:11:33.472 "block_size": 512, 00:11:33.472 "num_blocks": 65536, 00:11:33.472 "uuid": "9c28fba5-3382-4878-88c8-46f025ff2a9a", 00:11:33.472 
"assigned_rate_limits": { 00:11:33.472 "rw_ios_per_sec": 0, 00:11:33.472 "rw_mbytes_per_sec": 0, 00:11:33.472 "r_mbytes_per_sec": 0, 00:11:33.472 "w_mbytes_per_sec": 0 00:11:33.472 }, 00:11:33.472 "claimed": true, 00:11:33.472 "claim_type": "exclusive_write", 00:11:33.472 "zoned": false, 00:11:33.472 "supported_io_types": { 00:11:33.472 "read": true, 00:11:33.472 "write": true, 00:11:33.472 "unmap": true, 00:11:33.472 "flush": true, 00:11:33.472 "reset": true, 00:11:33.472 "nvme_admin": false, 00:11:33.472 "nvme_io": false, 00:11:33.472 "nvme_io_md": false, 00:11:33.472 "write_zeroes": true, 00:11:33.472 "zcopy": true, 00:11:33.472 "get_zone_info": false, 00:11:33.472 "zone_management": false, 00:11:33.472 "zone_append": false, 00:11:33.472 "compare": false, 00:11:33.472 "compare_and_write": false, 00:11:33.472 "abort": true, 00:11:33.472 "seek_hole": false, 00:11:33.472 "seek_data": false, 00:11:33.472 "copy": true, 00:11:33.472 "nvme_iov_md": false 00:11:33.472 }, 00:11:33.472 "memory_domains": [ 00:11:33.472 { 00:11:33.472 "dma_device_id": "system", 00:11:33.472 "dma_device_type": 1 00:11:33.472 }, 00:11:33.472 { 00:11:33.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.472 "dma_device_type": 2 00:11:33.472 } 00:11:33.472 ], 00:11:33.472 "driver_specific": {} 00:11:33.472 } 00:11:33.472 ] 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.472 "name": "Existed_Raid", 00:11:33.472 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:33.472 "strip_size_kb": 64, 00:11:33.472 "state": "configuring", 00:11:33.472 "raid_level": "raid0", 00:11:33.472 "superblock": true, 00:11:33.472 "num_base_bdevs": 4, 00:11:33.472 "num_base_bdevs_discovered": 2, 00:11:33.472 "num_base_bdevs_operational": 4, 
00:11:33.472 "base_bdevs_list": [ 00:11:33.472 { 00:11:33.472 "name": "BaseBdev1", 00:11:33.472 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:33.472 "is_configured": true, 00:11:33.472 "data_offset": 2048, 00:11:33.472 "data_size": 63488 00:11:33.472 }, 00:11:33.472 { 00:11:33.472 "name": "BaseBdev2", 00:11:33.472 "uuid": "9c28fba5-3382-4878-88c8-46f025ff2a9a", 00:11:33.472 "is_configured": true, 00:11:33.472 "data_offset": 2048, 00:11:33.472 "data_size": 63488 00:11:33.472 }, 00:11:33.472 { 00:11:33.472 "name": "BaseBdev3", 00:11:33.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.472 "is_configured": false, 00:11:33.472 "data_offset": 0, 00:11:33.472 "data_size": 0 00:11:33.472 }, 00:11:33.472 { 00:11:33.472 "name": "BaseBdev4", 00:11:33.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.472 "is_configured": false, 00:11:33.472 "data_offset": 0, 00:11:33.472 "data_size": 0 00:11:33.472 } 00:11:33.472 ] 00:11:33.472 }' 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.472 03:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.038 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.038 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.039 [2024-11-05 03:22:47.463710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.039 BaseBdev3 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.039 [ 00:11:34.039 { 00:11:34.039 "name": "BaseBdev3", 00:11:34.039 "aliases": [ 00:11:34.039 "d03444ce-00e0-4e89-a0f9-acd169e89c95" 00:11:34.039 ], 00:11:34.039 "product_name": "Malloc disk", 00:11:34.039 "block_size": 512, 00:11:34.039 "num_blocks": 65536, 00:11:34.039 "uuid": "d03444ce-00e0-4e89-a0f9-acd169e89c95", 00:11:34.039 "assigned_rate_limits": { 00:11:34.039 "rw_ios_per_sec": 0, 00:11:34.039 "rw_mbytes_per_sec": 0, 00:11:34.039 "r_mbytes_per_sec": 0, 00:11:34.039 "w_mbytes_per_sec": 0 00:11:34.039 }, 00:11:34.039 "claimed": true, 00:11:34.039 "claim_type": "exclusive_write", 00:11:34.039 "zoned": false, 00:11:34.039 "supported_io_types": { 00:11:34.039 "read": true, 00:11:34.039 
"write": true, 00:11:34.039 "unmap": true, 00:11:34.039 "flush": true, 00:11:34.039 "reset": true, 00:11:34.039 "nvme_admin": false, 00:11:34.039 "nvme_io": false, 00:11:34.039 "nvme_io_md": false, 00:11:34.039 "write_zeroes": true, 00:11:34.039 "zcopy": true, 00:11:34.039 "get_zone_info": false, 00:11:34.039 "zone_management": false, 00:11:34.039 "zone_append": false, 00:11:34.039 "compare": false, 00:11:34.039 "compare_and_write": false, 00:11:34.039 "abort": true, 00:11:34.039 "seek_hole": false, 00:11:34.039 "seek_data": false, 00:11:34.039 "copy": true, 00:11:34.039 "nvme_iov_md": false 00:11:34.039 }, 00:11:34.039 "memory_domains": [ 00:11:34.039 { 00:11:34.039 "dma_device_id": "system", 00:11:34.039 "dma_device_type": 1 00:11:34.039 }, 00:11:34.039 { 00:11:34.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.039 "dma_device_type": 2 00:11:34.039 } 00:11:34.039 ], 00:11:34.039 "driver_specific": {} 00:11:34.039 } 00:11:34.039 ] 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.039 "name": "Existed_Raid", 00:11:34.039 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:34.039 "strip_size_kb": 64, 00:11:34.039 "state": "configuring", 00:11:34.039 "raid_level": "raid0", 00:11:34.039 "superblock": true, 00:11:34.039 "num_base_bdevs": 4, 00:11:34.039 "num_base_bdevs_discovered": 3, 00:11:34.039 "num_base_bdevs_operational": 4, 00:11:34.039 "base_bdevs_list": [ 00:11:34.039 { 00:11:34.039 "name": "BaseBdev1", 00:11:34.039 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:34.039 "is_configured": true, 00:11:34.039 "data_offset": 2048, 00:11:34.039 "data_size": 63488 00:11:34.039 }, 00:11:34.039 { 00:11:34.039 "name": "BaseBdev2", 00:11:34.039 "uuid": 
"9c28fba5-3382-4878-88c8-46f025ff2a9a", 00:11:34.039 "is_configured": true, 00:11:34.039 "data_offset": 2048, 00:11:34.039 "data_size": 63488 00:11:34.039 }, 00:11:34.039 { 00:11:34.039 "name": "BaseBdev3", 00:11:34.039 "uuid": "d03444ce-00e0-4e89-a0f9-acd169e89c95", 00:11:34.039 "is_configured": true, 00:11:34.039 "data_offset": 2048, 00:11:34.039 "data_size": 63488 00:11:34.039 }, 00:11:34.039 { 00:11:34.039 "name": "BaseBdev4", 00:11:34.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.039 "is_configured": false, 00:11:34.039 "data_offset": 0, 00:11:34.039 "data_size": 0 00:11:34.039 } 00:11:34.039 ] 00:11:34.039 }' 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.039 03:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.605 [2024-11-05 03:22:48.065496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.605 [2024-11-05 03:22:48.065829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.605 [2024-11-05 03:22:48.065850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:34.605 BaseBdev4 00:11:34.605 [2024-11-05 03:22:48.066213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.605 [2024-11-05 03:22:48.066459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:34.605 [2024-11-05 03:22:48.066490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.605 [2024-11-05 03:22:48.066664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.605 [ 00:11:34.605 { 00:11:34.605 "name": "BaseBdev4", 00:11:34.605 "aliases": [ 00:11:34.605 "9e74fecc-5fc6-40e5-94b7-834adf111202" 00:11:34.605 ], 00:11:34.605 "product_name": "Malloc disk", 00:11:34.605 "block_size": 512, 00:11:34.605 
"num_blocks": 65536, 00:11:34.605 "uuid": "9e74fecc-5fc6-40e5-94b7-834adf111202", 00:11:34.605 "assigned_rate_limits": { 00:11:34.605 "rw_ios_per_sec": 0, 00:11:34.605 "rw_mbytes_per_sec": 0, 00:11:34.605 "r_mbytes_per_sec": 0, 00:11:34.605 "w_mbytes_per_sec": 0 00:11:34.605 }, 00:11:34.605 "claimed": true, 00:11:34.605 "claim_type": "exclusive_write", 00:11:34.605 "zoned": false, 00:11:34.605 "supported_io_types": { 00:11:34.605 "read": true, 00:11:34.605 "write": true, 00:11:34.605 "unmap": true, 00:11:34.605 "flush": true, 00:11:34.605 "reset": true, 00:11:34.605 "nvme_admin": false, 00:11:34.605 "nvme_io": false, 00:11:34.605 "nvme_io_md": false, 00:11:34.605 "write_zeroes": true, 00:11:34.605 "zcopy": true, 00:11:34.605 "get_zone_info": false, 00:11:34.605 "zone_management": false, 00:11:34.605 "zone_append": false, 00:11:34.605 "compare": false, 00:11:34.605 "compare_and_write": false, 00:11:34.605 "abort": true, 00:11:34.605 "seek_hole": false, 00:11:34.605 "seek_data": false, 00:11:34.605 "copy": true, 00:11:34.605 "nvme_iov_md": false 00:11:34.605 }, 00:11:34.605 "memory_domains": [ 00:11:34.605 { 00:11:34.605 "dma_device_id": "system", 00:11:34.605 "dma_device_type": 1 00:11:34.605 }, 00:11:34.605 { 00:11:34.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.605 "dma_device_type": 2 00:11:34.605 } 00:11:34.605 ], 00:11:34.605 "driver_specific": {} 00:11:34.605 } 00:11:34.605 ] 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.605 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.605 "name": "Existed_Raid", 00:11:34.605 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:34.605 "strip_size_kb": 64, 00:11:34.605 "state": "online", 00:11:34.605 "raid_level": "raid0", 00:11:34.605 "superblock": true, 00:11:34.605 "num_base_bdevs": 4, 
00:11:34.605 "num_base_bdevs_discovered": 4, 00:11:34.605 "num_base_bdevs_operational": 4, 00:11:34.605 "base_bdevs_list": [ 00:11:34.605 { 00:11:34.605 "name": "BaseBdev1", 00:11:34.605 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:34.605 "is_configured": true, 00:11:34.605 "data_offset": 2048, 00:11:34.605 "data_size": 63488 00:11:34.605 }, 00:11:34.605 { 00:11:34.605 "name": "BaseBdev2", 00:11:34.605 "uuid": "9c28fba5-3382-4878-88c8-46f025ff2a9a", 00:11:34.605 "is_configured": true, 00:11:34.605 "data_offset": 2048, 00:11:34.605 "data_size": 63488 00:11:34.605 }, 00:11:34.605 { 00:11:34.605 "name": "BaseBdev3", 00:11:34.605 "uuid": "d03444ce-00e0-4e89-a0f9-acd169e89c95", 00:11:34.605 "is_configured": true, 00:11:34.605 "data_offset": 2048, 00:11:34.605 "data_size": 63488 00:11:34.605 }, 00:11:34.605 { 00:11:34.605 "name": "BaseBdev4", 00:11:34.605 "uuid": "9e74fecc-5fc6-40e5-94b7-834adf111202", 00:11:34.605 "is_configured": true, 00:11:34.605 "data_offset": 2048, 00:11:34.605 "data_size": 63488 00:11:34.606 } 00:11:34.606 ] 00:11:34.606 }' 00:11:34.606 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.606 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.172 
03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.172 [2024-11-05 03:22:48.634186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.172 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.172 "name": "Existed_Raid", 00:11:35.172 "aliases": [ 00:11:35.172 "a5bb9456-b90d-48a1-a94c-0f9389632f75" 00:11:35.172 ], 00:11:35.172 "product_name": "Raid Volume", 00:11:35.172 "block_size": 512, 00:11:35.172 "num_blocks": 253952, 00:11:35.172 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:35.172 "assigned_rate_limits": { 00:11:35.172 "rw_ios_per_sec": 0, 00:11:35.172 "rw_mbytes_per_sec": 0, 00:11:35.172 "r_mbytes_per_sec": 0, 00:11:35.172 "w_mbytes_per_sec": 0 00:11:35.172 }, 00:11:35.172 "claimed": false, 00:11:35.172 "zoned": false, 00:11:35.172 "supported_io_types": { 00:11:35.172 "read": true, 00:11:35.172 "write": true, 00:11:35.172 "unmap": true, 00:11:35.172 "flush": true, 00:11:35.172 "reset": true, 00:11:35.172 "nvme_admin": false, 00:11:35.172 "nvme_io": false, 00:11:35.172 "nvme_io_md": false, 00:11:35.172 "write_zeroes": true, 00:11:35.172 "zcopy": false, 00:11:35.172 "get_zone_info": false, 00:11:35.172 "zone_management": false, 00:11:35.172 "zone_append": false, 00:11:35.172 "compare": false, 00:11:35.172 "compare_and_write": false, 00:11:35.172 "abort": false, 00:11:35.172 "seek_hole": false, 00:11:35.172 "seek_data": false, 00:11:35.172 "copy": false, 00:11:35.172 
"nvme_iov_md": false 00:11:35.172 }, 00:11:35.172 "memory_domains": [ 00:11:35.172 { 00:11:35.172 "dma_device_id": "system", 00:11:35.172 "dma_device_type": 1 00:11:35.172 }, 00:11:35.172 { 00:11:35.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.173 "dma_device_type": 2 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "dma_device_id": "system", 00:11:35.173 "dma_device_type": 1 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.173 "dma_device_type": 2 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "dma_device_id": "system", 00:11:35.173 "dma_device_type": 1 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.173 "dma_device_type": 2 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "dma_device_id": "system", 00:11:35.173 "dma_device_type": 1 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.173 "dma_device_type": 2 00:11:35.173 } 00:11:35.173 ], 00:11:35.173 "driver_specific": { 00:11:35.173 "raid": { 00:11:35.173 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:35.173 "strip_size_kb": 64, 00:11:35.173 "state": "online", 00:11:35.173 "raid_level": "raid0", 00:11:35.173 "superblock": true, 00:11:35.173 "num_base_bdevs": 4, 00:11:35.173 "num_base_bdevs_discovered": 4, 00:11:35.173 "num_base_bdevs_operational": 4, 00:11:35.173 "base_bdevs_list": [ 00:11:35.173 { 00:11:35.173 "name": "BaseBdev1", 00:11:35.173 "uuid": "c5016bb4-3086-412d-b770-704849c3657d", 00:11:35.173 "is_configured": true, 00:11:35.173 "data_offset": 2048, 00:11:35.173 "data_size": 63488 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "name": "BaseBdev2", 00:11:35.173 "uuid": "9c28fba5-3382-4878-88c8-46f025ff2a9a", 00:11:35.173 "is_configured": true, 00:11:35.173 "data_offset": 2048, 00:11:35.173 "data_size": 63488 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "name": "BaseBdev3", 00:11:35.173 "uuid": "d03444ce-00e0-4e89-a0f9-acd169e89c95", 00:11:35.173 "is_configured": true, 
00:11:35.173 "data_offset": 2048, 00:11:35.173 "data_size": 63488 00:11:35.173 }, 00:11:35.173 { 00:11:35.173 "name": "BaseBdev4", 00:11:35.173 "uuid": "9e74fecc-5fc6-40e5-94b7-834adf111202", 00:11:35.173 "is_configured": true, 00:11:35.173 "data_offset": 2048, 00:11:35.173 "data_size": 63488 00:11:35.173 } 00:11:35.173 ] 00:11:35.173 } 00:11:35.173 } 00:11:35.173 }' 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:35.173 BaseBdev2 00:11:35.173 BaseBdev3 00:11:35.173 BaseBdev4' 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.173 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.431 03:22:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 03:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.431 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.431 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.431 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.431 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.431 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.431 [2024-11-05 03:22:49.021916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.431 [2024-11-05 03:22:49.021959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.431 [2024-11-05 03:22:49.022033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.688 "name": "Existed_Raid", 00:11:35.688 "uuid": "a5bb9456-b90d-48a1-a94c-0f9389632f75", 00:11:35.688 "strip_size_kb": 64, 00:11:35.688 "state": "offline", 00:11:35.688 "raid_level": "raid0", 00:11:35.688 "superblock": true, 00:11:35.688 "num_base_bdevs": 4, 00:11:35.688 "num_base_bdevs_discovered": 3, 00:11:35.688 "num_base_bdevs_operational": 3, 00:11:35.688 "base_bdevs_list": [ 00:11:35.688 { 00:11:35.688 "name": null, 00:11:35.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.688 "is_configured": false, 00:11:35.688 "data_offset": 0, 00:11:35.688 "data_size": 63488 00:11:35.688 }, 00:11:35.688 { 00:11:35.688 "name": "BaseBdev2", 00:11:35.688 "uuid": "9c28fba5-3382-4878-88c8-46f025ff2a9a", 00:11:35.688 "is_configured": true, 00:11:35.688 "data_offset": 2048, 00:11:35.688 "data_size": 63488 00:11:35.688 }, 00:11:35.688 { 00:11:35.688 "name": "BaseBdev3", 00:11:35.688 "uuid": "d03444ce-00e0-4e89-a0f9-acd169e89c95", 00:11:35.688 "is_configured": true, 00:11:35.688 "data_offset": 2048, 00:11:35.688 "data_size": 63488 00:11:35.688 }, 00:11:35.688 { 00:11:35.688 "name": "BaseBdev4", 00:11:35.688 "uuid": "9e74fecc-5fc6-40e5-94b7-834adf111202", 00:11:35.688 "is_configured": true, 00:11:35.688 "data_offset": 2048, 00:11:35.688 "data_size": 63488 00:11:35.688 } 00:11:35.688 ] 00:11:35.688 }' 00:11:35.688 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.689 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.253 
03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.253 [2024-11-05 03:22:49.668112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.253 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.253 [2024-11-05 03:22:49.812928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:36.511 03:22:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.511 03:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.511 [2024-11-05 03:22:49.971745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:36.511 [2024-11-05 03:22:49.971808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.511 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.770 BaseBdev2 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.770 [ 00:11:36.770 { 00:11:36.770 "name": "BaseBdev2", 00:11:36.770 "aliases": [ 00:11:36.770 
"13d28ddc-350f-4927-a96d-4ffcb9ad19a2" 00:11:36.770 ], 00:11:36.770 "product_name": "Malloc disk", 00:11:36.770 "block_size": 512, 00:11:36.770 "num_blocks": 65536, 00:11:36.770 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:36.770 "assigned_rate_limits": { 00:11:36.770 "rw_ios_per_sec": 0, 00:11:36.770 "rw_mbytes_per_sec": 0, 00:11:36.770 "r_mbytes_per_sec": 0, 00:11:36.770 "w_mbytes_per_sec": 0 00:11:36.770 }, 00:11:36.770 "claimed": false, 00:11:36.770 "zoned": false, 00:11:36.770 "supported_io_types": { 00:11:36.770 "read": true, 00:11:36.770 "write": true, 00:11:36.770 "unmap": true, 00:11:36.770 "flush": true, 00:11:36.770 "reset": true, 00:11:36.770 "nvme_admin": false, 00:11:36.770 "nvme_io": false, 00:11:36.770 "nvme_io_md": false, 00:11:36.770 "write_zeroes": true, 00:11:36.770 "zcopy": true, 00:11:36.770 "get_zone_info": false, 00:11:36.770 "zone_management": false, 00:11:36.770 "zone_append": false, 00:11:36.770 "compare": false, 00:11:36.770 "compare_and_write": false, 00:11:36.770 "abort": true, 00:11:36.770 "seek_hole": false, 00:11:36.770 "seek_data": false, 00:11:36.770 "copy": true, 00:11:36.770 "nvme_iov_md": false 00:11:36.770 }, 00:11:36.770 "memory_domains": [ 00:11:36.770 { 00:11:36.770 "dma_device_id": "system", 00:11:36.770 "dma_device_type": 1 00:11:36.770 }, 00:11:36.770 { 00:11:36.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.770 "dma_device_type": 2 00:11:36.770 } 00:11:36.770 ], 00:11:36.770 "driver_specific": {} 00:11:36.770 } 00:11:36.770 ] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.770 03:22:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.770 BaseBdev3 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.770 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.771 [ 00:11:36.771 { 
00:11:36.771 "name": "BaseBdev3", 00:11:36.771 "aliases": [ 00:11:36.771 "b648affe-9207-490a-b67a-c2ca696dae26" 00:11:36.771 ], 00:11:36.771 "product_name": "Malloc disk", 00:11:36.771 "block_size": 512, 00:11:36.771 "num_blocks": 65536, 00:11:36.771 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:36.771 "assigned_rate_limits": { 00:11:36.771 "rw_ios_per_sec": 0, 00:11:36.771 "rw_mbytes_per_sec": 0, 00:11:36.771 "r_mbytes_per_sec": 0, 00:11:36.771 "w_mbytes_per_sec": 0 00:11:36.771 }, 00:11:36.771 "claimed": false, 00:11:36.771 "zoned": false, 00:11:36.771 "supported_io_types": { 00:11:36.771 "read": true, 00:11:36.771 "write": true, 00:11:36.771 "unmap": true, 00:11:36.771 "flush": true, 00:11:36.771 "reset": true, 00:11:36.771 "nvme_admin": false, 00:11:36.771 "nvme_io": false, 00:11:36.771 "nvme_io_md": false, 00:11:36.771 "write_zeroes": true, 00:11:36.771 "zcopy": true, 00:11:36.771 "get_zone_info": false, 00:11:36.771 "zone_management": false, 00:11:36.771 "zone_append": false, 00:11:36.771 "compare": false, 00:11:36.771 "compare_and_write": false, 00:11:36.771 "abort": true, 00:11:36.771 "seek_hole": false, 00:11:36.771 "seek_data": false, 00:11:36.771 "copy": true, 00:11:36.771 "nvme_iov_md": false 00:11:36.771 }, 00:11:36.771 "memory_domains": [ 00:11:36.771 { 00:11:36.771 "dma_device_id": "system", 00:11:36.771 "dma_device_type": 1 00:11:36.771 }, 00:11:36.771 { 00:11:36.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.771 "dma_device_type": 2 00:11:36.771 } 00:11:36.771 ], 00:11:36.771 "driver_specific": {} 00:11:36.771 } 00:11:36.771 ] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.771 BaseBdev4 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:36.771 [ 00:11:36.771 { 00:11:36.771 "name": "BaseBdev4", 00:11:36.771 "aliases": [ 00:11:36.771 "e17736fd-1a91-4e36-b0fb-435b50974f01" 00:11:36.771 ], 00:11:36.771 "product_name": "Malloc disk", 00:11:36.771 "block_size": 512, 00:11:36.771 "num_blocks": 65536, 00:11:36.771 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:36.771 "assigned_rate_limits": { 00:11:36.771 "rw_ios_per_sec": 0, 00:11:36.771 "rw_mbytes_per_sec": 0, 00:11:36.771 "r_mbytes_per_sec": 0, 00:11:36.771 "w_mbytes_per_sec": 0 00:11:36.771 }, 00:11:36.771 "claimed": false, 00:11:36.771 "zoned": false, 00:11:36.771 "supported_io_types": { 00:11:36.771 "read": true, 00:11:36.771 "write": true, 00:11:36.771 "unmap": true, 00:11:36.771 "flush": true, 00:11:36.771 "reset": true, 00:11:36.771 "nvme_admin": false, 00:11:36.771 "nvme_io": false, 00:11:36.771 "nvme_io_md": false, 00:11:36.771 "write_zeroes": true, 00:11:36.771 "zcopy": true, 00:11:36.771 "get_zone_info": false, 00:11:36.771 "zone_management": false, 00:11:36.771 "zone_append": false, 00:11:36.771 "compare": false, 00:11:36.771 "compare_and_write": false, 00:11:36.771 "abort": true, 00:11:36.771 "seek_hole": false, 00:11:36.771 "seek_data": false, 00:11:36.771 "copy": true, 00:11:36.771 "nvme_iov_md": false 00:11:36.771 }, 00:11:36.771 "memory_domains": [ 00:11:36.771 { 00:11:36.771 "dma_device_id": "system", 00:11:36.771 "dma_device_type": 1 00:11:36.771 }, 00:11:36.771 { 00:11:36.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.771 "dma_device_type": 2 00:11:36.771 } 00:11:36.771 ], 00:11:36.771 "driver_specific": {} 00:11:36.771 } 00:11:36.771 ] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.771 03:22:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.771 [2024-11-05 03:22:50.320565] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.771 [2024-11-05 03:22:50.320756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.771 [2024-11-05 03:22:50.320893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.771 [2024-11-05 03:22:50.323336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.771 [2024-11-05 03:22:50.323410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.771 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.771 "name": "Existed_Raid", 00:11:36.771 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:36.771 "strip_size_kb": 64, 00:11:36.771 "state": "configuring", 00:11:36.771 "raid_level": "raid0", 00:11:36.771 "superblock": true, 00:11:36.771 "num_base_bdevs": 4, 00:11:36.771 "num_base_bdevs_discovered": 3, 00:11:36.771 "num_base_bdevs_operational": 4, 00:11:36.771 "base_bdevs_list": [ 00:11:36.771 { 00:11:36.772 "name": "BaseBdev1", 00:11:36.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.772 "is_configured": false, 00:11:36.772 "data_offset": 0, 00:11:36.772 "data_size": 0 00:11:36.772 }, 00:11:36.772 { 00:11:36.772 "name": "BaseBdev2", 00:11:36.772 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:36.772 "is_configured": true, 00:11:36.772 "data_offset": 2048, 00:11:36.772 "data_size": 63488 
00:11:36.772 }, 00:11:36.772 { 00:11:36.772 "name": "BaseBdev3", 00:11:36.772 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:36.772 "is_configured": true, 00:11:36.772 "data_offset": 2048, 00:11:36.772 "data_size": 63488 00:11:36.772 }, 00:11:36.772 { 00:11:36.772 "name": "BaseBdev4", 00:11:36.772 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:36.772 "is_configured": true, 00:11:36.772 "data_offset": 2048, 00:11:36.772 "data_size": 63488 00:11:36.772 } 00:11:36.772 ] 00:11:36.772 }' 00:11:36.772 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.772 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.338 [2024-11-05 03:22:50.860691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.338 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.338 "name": "Existed_Raid", 00:11:37.338 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:37.338 "strip_size_kb": 64, 00:11:37.338 "state": "configuring", 00:11:37.338 "raid_level": "raid0", 00:11:37.338 "superblock": true, 00:11:37.338 "num_base_bdevs": 4, 00:11:37.338 "num_base_bdevs_discovered": 2, 00:11:37.338 "num_base_bdevs_operational": 4, 00:11:37.338 "base_bdevs_list": [ 00:11:37.338 { 00:11:37.338 "name": "BaseBdev1", 00:11:37.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.338 "is_configured": false, 00:11:37.338 "data_offset": 0, 00:11:37.338 "data_size": 0 00:11:37.338 }, 00:11:37.338 { 00:11:37.338 "name": null, 00:11:37.338 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:37.338 "is_configured": false, 00:11:37.338 "data_offset": 0, 00:11:37.338 "data_size": 63488 
00:11:37.338 }, 00:11:37.338 { 00:11:37.339 "name": "BaseBdev3", 00:11:37.339 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:37.339 "is_configured": true, 00:11:37.339 "data_offset": 2048, 00:11:37.339 "data_size": 63488 00:11:37.339 }, 00:11:37.339 { 00:11:37.339 "name": "BaseBdev4", 00:11:37.339 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:37.339 "is_configured": true, 00:11:37.339 "data_offset": 2048, 00:11:37.339 "data_size": 63488 00:11:37.339 } 00:11:37.339 ] 00:11:37.339 }' 00:11:37.339 03:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.339 03:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.905 [2024-11-05 03:22:51.474072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.905 BaseBdev1 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.905 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.905 [ 00:11:37.905 { 00:11:37.905 "name": "BaseBdev1", 00:11:37.905 "aliases": [ 00:11:37.905 "37eabf6e-4485-49bc-a2c8-54f054b7d0b9" 00:11:37.905 ], 00:11:37.905 "product_name": "Malloc disk", 00:11:37.905 "block_size": 512, 00:11:37.905 "num_blocks": 65536, 00:11:37.905 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:37.905 "assigned_rate_limits": { 00:11:37.905 "rw_ios_per_sec": 0, 00:11:37.905 "rw_mbytes_per_sec": 0, 
00:11:37.905 "r_mbytes_per_sec": 0, 00:11:37.905 "w_mbytes_per_sec": 0 00:11:37.905 }, 00:11:37.905 "claimed": true, 00:11:37.905 "claim_type": "exclusive_write", 00:11:37.905 "zoned": false, 00:11:37.905 "supported_io_types": { 00:11:37.905 "read": true, 00:11:37.905 "write": true, 00:11:37.905 "unmap": true, 00:11:37.905 "flush": true, 00:11:37.905 "reset": true, 00:11:37.905 "nvme_admin": false, 00:11:37.905 "nvme_io": false, 00:11:37.905 "nvme_io_md": false, 00:11:37.905 "write_zeroes": true, 00:11:37.905 "zcopy": true, 00:11:37.905 "get_zone_info": false, 00:11:37.905 "zone_management": false, 00:11:37.905 "zone_append": false, 00:11:37.905 "compare": false, 00:11:37.905 "compare_and_write": false, 00:11:37.905 "abort": true, 00:11:37.905 "seek_hole": false, 00:11:37.905 "seek_data": false, 00:11:37.905 "copy": true, 00:11:37.905 "nvme_iov_md": false 00:11:37.905 }, 00:11:37.905 "memory_domains": [ 00:11:37.905 { 00:11:37.905 "dma_device_id": "system", 00:11:37.905 "dma_device_type": 1 00:11:37.905 }, 00:11:37.905 { 00:11:37.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.905 "dma_device_type": 2 00:11:37.905 } 00:11:37.905 ], 00:11:37.905 "driver_specific": {} 00:11:37.905 } 00:11:37.906 ] 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.906 03:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.906 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.164 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.164 "name": "Existed_Raid", 00:11:38.164 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:38.164 "strip_size_kb": 64, 00:11:38.164 "state": "configuring", 00:11:38.164 "raid_level": "raid0", 00:11:38.164 "superblock": true, 00:11:38.164 "num_base_bdevs": 4, 00:11:38.164 "num_base_bdevs_discovered": 3, 00:11:38.164 "num_base_bdevs_operational": 4, 00:11:38.164 "base_bdevs_list": [ 00:11:38.164 { 00:11:38.164 "name": "BaseBdev1", 00:11:38.164 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:38.164 "is_configured": true, 00:11:38.164 "data_offset": 2048, 00:11:38.164 "data_size": 63488 00:11:38.164 }, 00:11:38.164 { 
00:11:38.164 "name": null, 00:11:38.164 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:38.164 "is_configured": false, 00:11:38.164 "data_offset": 0, 00:11:38.164 "data_size": 63488 00:11:38.164 }, 00:11:38.164 { 00:11:38.164 "name": "BaseBdev3", 00:11:38.164 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:38.164 "is_configured": true, 00:11:38.164 "data_offset": 2048, 00:11:38.164 "data_size": 63488 00:11:38.164 }, 00:11:38.164 { 00:11:38.164 "name": "BaseBdev4", 00:11:38.164 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:38.164 "is_configured": true, 00:11:38.164 "data_offset": 2048, 00:11:38.164 "data_size": 63488 00:11:38.164 } 00:11:38.164 ] 00:11:38.164 }' 00:11:38.164 03:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.164 03:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.421 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.422 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.679 [2024-11-05 03:22:52.062327] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.679 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.680 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.680 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.680 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.680 03:22:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.680 "name": "Existed_Raid", 00:11:38.680 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:38.680 "strip_size_kb": 64, 00:11:38.680 "state": "configuring", 00:11:38.680 "raid_level": "raid0", 00:11:38.680 "superblock": true, 00:11:38.680 "num_base_bdevs": 4, 00:11:38.680 "num_base_bdevs_discovered": 2, 00:11:38.680 "num_base_bdevs_operational": 4, 00:11:38.680 "base_bdevs_list": [ 00:11:38.680 { 00:11:38.680 "name": "BaseBdev1", 00:11:38.680 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:38.680 "is_configured": true, 00:11:38.680 "data_offset": 2048, 00:11:38.680 "data_size": 63488 00:11:38.680 }, 00:11:38.680 { 00:11:38.680 "name": null, 00:11:38.680 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:38.680 "is_configured": false, 00:11:38.680 "data_offset": 0, 00:11:38.680 "data_size": 63488 00:11:38.680 }, 00:11:38.680 { 00:11:38.680 "name": null, 00:11:38.680 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:38.680 "is_configured": false, 00:11:38.680 "data_offset": 0, 00:11:38.680 "data_size": 63488 00:11:38.680 }, 00:11:38.680 { 00:11:38.680 "name": "BaseBdev4", 00:11:38.680 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:38.680 "is_configured": true, 00:11:38.680 "data_offset": 2048, 00:11:38.680 "data_size": 63488 00:11:38.680 } 00:11:38.680 ] 00:11:38.680 }' 00:11:38.680 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.680 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.246 
03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.246 [2024-11-05 03:22:52.630472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.246 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.247 "name": "Existed_Raid", 00:11:39.247 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:39.247 "strip_size_kb": 64, 00:11:39.247 "state": "configuring", 00:11:39.247 "raid_level": "raid0", 00:11:39.247 "superblock": true, 00:11:39.247 "num_base_bdevs": 4, 00:11:39.247 "num_base_bdevs_discovered": 3, 00:11:39.247 "num_base_bdevs_operational": 4, 00:11:39.247 "base_bdevs_list": [ 00:11:39.247 { 00:11:39.247 "name": "BaseBdev1", 00:11:39.247 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:39.247 "is_configured": true, 00:11:39.247 "data_offset": 2048, 00:11:39.247 "data_size": 63488 00:11:39.247 }, 00:11:39.247 { 00:11:39.247 "name": null, 00:11:39.247 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:39.247 "is_configured": false, 00:11:39.247 "data_offset": 0, 00:11:39.247 "data_size": 63488 00:11:39.247 }, 00:11:39.247 { 00:11:39.247 "name": "BaseBdev3", 00:11:39.247 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:39.247 "is_configured": true, 00:11:39.247 "data_offset": 2048, 00:11:39.247 "data_size": 63488 00:11:39.247 }, 00:11:39.247 { 00:11:39.247 "name": "BaseBdev4", 00:11:39.247 "uuid": 
"e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:39.247 "is_configured": true, 00:11:39.247 "data_offset": 2048, 00:11:39.247 "data_size": 63488 00:11:39.247 } 00:11:39.247 ] 00:11:39.247 }' 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.247 03:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.814 [2024-11-05 03:22:53.206643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.814 "name": "Existed_Raid", 00:11:39.814 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:39.814 "strip_size_kb": 64, 00:11:39.814 "state": "configuring", 00:11:39.814 "raid_level": "raid0", 00:11:39.814 "superblock": true, 00:11:39.814 "num_base_bdevs": 4, 00:11:39.814 "num_base_bdevs_discovered": 2, 00:11:39.814 "num_base_bdevs_operational": 4, 00:11:39.814 "base_bdevs_list": [ 00:11:39.814 { 00:11:39.814 "name": null, 00:11:39.814 
"uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:39.814 "is_configured": false, 00:11:39.814 "data_offset": 0, 00:11:39.814 "data_size": 63488 00:11:39.814 }, 00:11:39.814 { 00:11:39.814 "name": null, 00:11:39.814 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:39.814 "is_configured": false, 00:11:39.814 "data_offset": 0, 00:11:39.814 "data_size": 63488 00:11:39.814 }, 00:11:39.814 { 00:11:39.814 "name": "BaseBdev3", 00:11:39.814 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:39.814 "is_configured": true, 00:11:39.814 "data_offset": 2048, 00:11:39.814 "data_size": 63488 00:11:39.814 }, 00:11:39.814 { 00:11:39.814 "name": "BaseBdev4", 00:11:39.814 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:39.814 "is_configured": true, 00:11:39.814 "data_offset": 2048, 00:11:39.814 "data_size": 63488 00:11:39.814 } 00:11:39.814 ] 00:11:39.814 }' 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.814 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.381 [2024-11-05 03:22:53.863124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.381 03:22:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.381 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.381 "name": "Existed_Raid", 00:11:40.381 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:40.381 "strip_size_kb": 64, 00:11:40.381 "state": "configuring", 00:11:40.381 "raid_level": "raid0", 00:11:40.381 "superblock": true, 00:11:40.381 "num_base_bdevs": 4, 00:11:40.381 "num_base_bdevs_discovered": 3, 00:11:40.381 "num_base_bdevs_operational": 4, 00:11:40.381 "base_bdevs_list": [ 00:11:40.381 { 00:11:40.381 "name": null, 00:11:40.381 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:40.381 "is_configured": false, 00:11:40.381 "data_offset": 0, 00:11:40.381 "data_size": 63488 00:11:40.381 }, 00:11:40.382 { 00:11:40.382 "name": "BaseBdev2", 00:11:40.382 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:40.382 "is_configured": true, 00:11:40.382 "data_offset": 2048, 00:11:40.382 "data_size": 63488 00:11:40.382 }, 00:11:40.382 { 00:11:40.382 "name": "BaseBdev3", 00:11:40.382 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:40.382 "is_configured": true, 00:11:40.382 "data_offset": 2048, 00:11:40.382 "data_size": 63488 00:11:40.382 }, 00:11:40.382 { 00:11:40.382 "name": "BaseBdev4", 00:11:40.382 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:40.382 "is_configured": true, 00:11:40.382 "data_offset": 2048, 00:11:40.382 "data_size": 63488 00:11:40.382 } 00:11:40.382 ] 00:11:40.382 }' 00:11:40.382 03:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.382 03:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.948 03:22:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.948 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.948 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.948 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:40.948 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37eabf6e-4485-49bc-a2c8-54f054b7d0b9 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.949 [2024-11-05 03:22:54.528513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:40.949 NewBaseBdev 00:11:40.949 [2024-11-05 03:22:54.529049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.949 [2024-11-05 03:22:54.529073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:40.949 [2024-11-05 03:22:54.529423] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:40.949 [2024-11-05 03:22:54.529605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.949 [2024-11-05 03:22:54.529646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.949 [2024-11-05 03:22:54.529800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.949 03:22:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.949 [ 00:11:40.949 { 00:11:40.949 "name": "NewBaseBdev", 00:11:40.949 "aliases": [ 00:11:40.949 "37eabf6e-4485-49bc-a2c8-54f054b7d0b9" 00:11:40.949 ], 00:11:40.949 "product_name": "Malloc disk", 00:11:40.949 "block_size": 512, 00:11:40.949 "num_blocks": 65536, 00:11:40.949 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:40.949 "assigned_rate_limits": { 00:11:40.949 "rw_ios_per_sec": 0, 00:11:40.949 "rw_mbytes_per_sec": 0, 00:11:40.949 "r_mbytes_per_sec": 0, 00:11:40.949 "w_mbytes_per_sec": 0 00:11:40.949 }, 00:11:40.949 "claimed": true, 00:11:40.949 "claim_type": "exclusive_write", 00:11:40.949 "zoned": false, 00:11:40.949 "supported_io_types": { 00:11:40.949 "read": true, 00:11:40.949 "write": true, 00:11:40.949 "unmap": true, 00:11:40.949 "flush": true, 00:11:40.949 "reset": true, 00:11:40.949 "nvme_admin": false, 00:11:40.949 "nvme_io": false, 00:11:40.949 "nvme_io_md": false, 00:11:40.949 "write_zeroes": true, 00:11:40.949 "zcopy": true, 00:11:40.949 "get_zone_info": false, 00:11:40.949 "zone_management": false, 00:11:40.949 "zone_append": false, 00:11:40.949 "compare": false, 00:11:40.949 "compare_and_write": false, 00:11:40.949 "abort": true, 00:11:40.949 "seek_hole": false, 00:11:40.949 "seek_data": false, 00:11:40.949 "copy": true, 00:11:40.949 "nvme_iov_md": false 00:11:40.949 }, 00:11:40.949 "memory_domains": [ 00:11:40.949 { 00:11:40.949 "dma_device_id": "system", 00:11:40.949 "dma_device_type": 1 00:11:40.949 }, 00:11:40.949 { 00:11:40.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.949 "dma_device_type": 2 00:11:40.949 } 00:11:40.949 ], 00:11:40.949 "driver_specific": {} 00:11:40.949 } 00:11:40.949 ] 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:40.949 03:22:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.949 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.207 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.207 "name": "Existed_Raid", 00:11:41.207 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:41.207 "strip_size_kb": 64, 00:11:41.207 
"state": "online", 00:11:41.207 "raid_level": "raid0", 00:11:41.207 "superblock": true, 00:11:41.207 "num_base_bdevs": 4, 00:11:41.207 "num_base_bdevs_discovered": 4, 00:11:41.207 "num_base_bdevs_operational": 4, 00:11:41.207 "base_bdevs_list": [ 00:11:41.207 { 00:11:41.207 "name": "NewBaseBdev", 00:11:41.207 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 2048, 00:11:41.207 "data_size": 63488 00:11:41.207 }, 00:11:41.207 { 00:11:41.207 "name": "BaseBdev2", 00:11:41.207 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 2048, 00:11:41.207 "data_size": 63488 00:11:41.207 }, 00:11:41.207 { 00:11:41.207 "name": "BaseBdev3", 00:11:41.207 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 2048, 00:11:41.207 "data_size": 63488 00:11:41.207 }, 00:11:41.207 { 00:11:41.207 "name": "BaseBdev4", 00:11:41.207 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:41.207 "is_configured": true, 00:11:41.207 "data_offset": 2048, 00:11:41.207 "data_size": 63488 00:11:41.207 } 00:11:41.207 ] 00:11:41.207 }' 00:11:41.207 03:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.207 03:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.466 
03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.466 [2024-11-05 03:22:55.069161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.466 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.725 "name": "Existed_Raid", 00:11:41.725 "aliases": [ 00:11:41.725 "234c6322-ce49-4504-abd2-3e322dcd11e3" 00:11:41.725 ], 00:11:41.725 "product_name": "Raid Volume", 00:11:41.725 "block_size": 512, 00:11:41.725 "num_blocks": 253952, 00:11:41.725 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:41.725 "assigned_rate_limits": { 00:11:41.725 "rw_ios_per_sec": 0, 00:11:41.725 "rw_mbytes_per_sec": 0, 00:11:41.725 "r_mbytes_per_sec": 0, 00:11:41.725 "w_mbytes_per_sec": 0 00:11:41.725 }, 00:11:41.725 "claimed": false, 00:11:41.725 "zoned": false, 00:11:41.725 "supported_io_types": { 00:11:41.725 "read": true, 00:11:41.725 "write": true, 00:11:41.725 "unmap": true, 00:11:41.725 "flush": true, 00:11:41.725 "reset": true, 00:11:41.725 "nvme_admin": false, 00:11:41.725 "nvme_io": false, 00:11:41.725 "nvme_io_md": false, 00:11:41.725 "write_zeroes": true, 00:11:41.725 "zcopy": false, 00:11:41.725 "get_zone_info": false, 00:11:41.725 "zone_management": false, 00:11:41.725 "zone_append": false, 00:11:41.725 "compare": false, 00:11:41.725 "compare_and_write": false, 00:11:41.725 "abort": 
false, 00:11:41.725 "seek_hole": false, 00:11:41.725 "seek_data": false, 00:11:41.725 "copy": false, 00:11:41.725 "nvme_iov_md": false 00:11:41.725 }, 00:11:41.725 "memory_domains": [ 00:11:41.725 { 00:11:41.725 "dma_device_id": "system", 00:11:41.725 "dma_device_type": 1 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.725 "dma_device_type": 2 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "system", 00:11:41.725 "dma_device_type": 1 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.725 "dma_device_type": 2 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "system", 00:11:41.725 "dma_device_type": 1 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.725 "dma_device_type": 2 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "system", 00:11:41.725 "dma_device_type": 1 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.725 "dma_device_type": 2 00:11:41.725 } 00:11:41.725 ], 00:11:41.725 "driver_specific": { 00:11:41.725 "raid": { 00:11:41.725 "uuid": "234c6322-ce49-4504-abd2-3e322dcd11e3", 00:11:41.725 "strip_size_kb": 64, 00:11:41.725 "state": "online", 00:11:41.725 "raid_level": "raid0", 00:11:41.725 "superblock": true, 00:11:41.725 "num_base_bdevs": 4, 00:11:41.725 "num_base_bdevs_discovered": 4, 00:11:41.725 "num_base_bdevs_operational": 4, 00:11:41.725 "base_bdevs_list": [ 00:11:41.725 { 00:11:41.725 "name": "NewBaseBdev", 00:11:41.725 "uuid": "37eabf6e-4485-49bc-a2c8-54f054b7d0b9", 00:11:41.725 "is_configured": true, 00:11:41.725 "data_offset": 2048, 00:11:41.725 "data_size": 63488 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "name": "BaseBdev2", 00:11:41.725 "uuid": "13d28ddc-350f-4927-a96d-4ffcb9ad19a2", 00:11:41.725 "is_configured": true, 00:11:41.725 "data_offset": 2048, 00:11:41.725 "data_size": 63488 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 
"name": "BaseBdev3", 00:11:41.725 "uuid": "b648affe-9207-490a-b67a-c2ca696dae26", 00:11:41.725 "is_configured": true, 00:11:41.725 "data_offset": 2048, 00:11:41.725 "data_size": 63488 00:11:41.725 }, 00:11:41.725 { 00:11:41.725 "name": "BaseBdev4", 00:11:41.725 "uuid": "e17736fd-1a91-4e36-b0fb-435b50974f01", 00:11:41.725 "is_configured": true, 00:11:41.725 "data_offset": 2048, 00:11:41.725 "data_size": 63488 00:11:41.725 } 00:11:41.725 ] 00:11:41.725 } 00:11:41.725 } 00:11:41.725 }' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.725 BaseBdev2 00:11:41.725 BaseBdev3 00:11:41.725 BaseBdev4' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.725 03:22:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.725 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.984 [2024-11-05 03:22:55.432825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.984 [2024-11-05 03:22:55.432862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.984 [2024-11-05 03:22:55.432956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.984 [2024-11-05 03:22:55.433044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.984 [2024-11-05 03:22:55.433061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69905 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 69905 ']' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 69905 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69905 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:41.984 killing process with pid 69905 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69905' 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 69905 00:11:41.984 [2024-11-05 03:22:55.471165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.984 03:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 69905 00:11:42.242 [2024-11-05 03:22:55.821440] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.620 03:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.620 00:11:43.620 real 0m12.791s 00:11:43.620 user 0m21.330s 00:11:43.620 sys 0m1.766s 00:11:43.620 03:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.620 
************************************ 00:11:43.620 END TEST raid_state_function_test_sb 00:11:43.620 ************************************ 00:11:43.620 03:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.620 03:22:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:43.620 03:22:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:43.620 03:22:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.620 03:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.620 ************************************ 00:11:43.620 START TEST raid_superblock_test 00:11:43.620 ************************************ 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70585 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70585 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70585 ']' 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.620 03:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.620 [2024-11-05 03:22:56.997835] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:43.620 [2024-11-05 03:22:56.998012] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:11:43.620 [2024-11-05 03:22:57.188767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.879 [2024-11-05 03:22:57.363454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.137 [2024-11-05 03:22:57.590965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.137 [2024-11-05 03:22:57.591312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.395 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.396 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.396 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:44.396 
03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.396 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 malloc1 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 [2024-11-05 03:22:58.062142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.654 [2024-11-05 03:22:58.062214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.654 [2024-11-05 03:22:58.062246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:44.654 [2024-11-05 03:22:58.062261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.654 [2024-11-05 03:22:58.065000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.654 [2024-11-05 03:22:58.065193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.654 pt1 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 malloc2 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 [2024-11-05 03:22:58.113878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.654 [2024-11-05 03:22:58.113948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.654 [2024-11-05 03:22:58.113979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:44.654 [2024-11-05 03:22:58.113995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.654 [2024-11-05 03:22:58.116740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.654 [2024-11-05 03:22:58.116787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.654 
pt2 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:44.654 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.655 malloc3 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.655 [2024-11-05 03:22:58.188169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.655 [2024-11-05 03:22:58.188255] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.655 [2024-11-05 03:22:58.188292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:44.655 [2024-11-05 03:22:58.188336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.655 [2024-11-05 03:22:58.191709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.655 [2024-11-05 03:22:58.191768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.655 pt3 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.655 malloc4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.655 [2024-11-05 03:22:58.252169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:44.655 [2024-11-05 03:22:58.252251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.655 [2024-11-05 03:22:58.252291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:44.655 [2024-11-05 03:22:58.252327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.655 [2024-11-05 03:22:58.255651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.655 [2024-11-05 03:22:58.255710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:44.655 pt4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.655 [2024-11-05 03:22:58.260466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:44.655 [2024-11-05 
03:22:58.263383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.655 [2024-11-05 03:22:58.263495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:44.655 [2024-11-05 03:22:58.263608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:44.655 [2024-11-05 03:22:58.263913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:44.655 [2024-11-05 03:22:58.263936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.655 [2024-11-05 03:22:58.264358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.655 [2024-11-05 03:22:58.264625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:44.655 [2024-11-05 03:22:58.264650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:44.655 [2024-11-05 03:22:58.264941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.655 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.913 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.913 "name": "raid_bdev1", 00:11:44.913 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:44.913 "strip_size_kb": 64, 00:11:44.913 "state": "online", 00:11:44.913 "raid_level": "raid0", 00:11:44.913 "superblock": true, 00:11:44.913 "num_base_bdevs": 4, 00:11:44.913 "num_base_bdevs_discovered": 4, 00:11:44.913 "num_base_bdevs_operational": 4, 00:11:44.913 "base_bdevs_list": [ 00:11:44.913 { 00:11:44.913 "name": "pt1", 00:11:44.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.913 "is_configured": true, 00:11:44.913 "data_offset": 2048, 00:11:44.913 "data_size": 63488 00:11:44.913 }, 00:11:44.913 { 00:11:44.913 "name": "pt2", 00:11:44.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.913 "is_configured": true, 00:11:44.913 "data_offset": 2048, 00:11:44.913 "data_size": 63488 00:11:44.913 }, 00:11:44.913 { 00:11:44.913 "name": "pt3", 00:11:44.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.913 "is_configured": true, 00:11:44.913 "data_offset": 2048, 00:11:44.913 
"data_size": 63488 00:11:44.913 }, 00:11:44.913 { 00:11:44.913 "name": "pt4", 00:11:44.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.913 "is_configured": true, 00:11:44.913 "data_offset": 2048, 00:11:44.913 "data_size": 63488 00:11:44.913 } 00:11:44.913 ] 00:11:44.913 }' 00:11:44.913 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.913 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.172 [2024-11-05 03:22:58.757401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.172 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.172 "name": "raid_bdev1", 00:11:45.172 "aliases": [ 00:11:45.172 "3cf4c2cd-77e8-4bd2-a046-ab6134da819e" 
00:11:45.172 ], 00:11:45.172 "product_name": "Raid Volume", 00:11:45.172 "block_size": 512, 00:11:45.172 "num_blocks": 253952, 00:11:45.172 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:45.172 "assigned_rate_limits": { 00:11:45.172 "rw_ios_per_sec": 0, 00:11:45.172 "rw_mbytes_per_sec": 0, 00:11:45.172 "r_mbytes_per_sec": 0, 00:11:45.172 "w_mbytes_per_sec": 0 00:11:45.172 }, 00:11:45.172 "claimed": false, 00:11:45.172 "zoned": false, 00:11:45.172 "supported_io_types": { 00:11:45.172 "read": true, 00:11:45.172 "write": true, 00:11:45.172 "unmap": true, 00:11:45.172 "flush": true, 00:11:45.172 "reset": true, 00:11:45.172 "nvme_admin": false, 00:11:45.172 "nvme_io": false, 00:11:45.172 "nvme_io_md": false, 00:11:45.172 "write_zeroes": true, 00:11:45.172 "zcopy": false, 00:11:45.172 "get_zone_info": false, 00:11:45.172 "zone_management": false, 00:11:45.172 "zone_append": false, 00:11:45.172 "compare": false, 00:11:45.172 "compare_and_write": false, 00:11:45.172 "abort": false, 00:11:45.172 "seek_hole": false, 00:11:45.172 "seek_data": false, 00:11:45.172 "copy": false, 00:11:45.172 "nvme_iov_md": false 00:11:45.172 }, 00:11:45.172 "memory_domains": [ 00:11:45.172 { 00:11:45.172 "dma_device_id": "system", 00:11:45.172 "dma_device_type": 1 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.172 "dma_device_type": 2 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": "system", 00:11:45.172 "dma_device_type": 1 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.172 "dma_device_type": 2 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": "system", 00:11:45.172 "dma_device_type": 1 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.172 "dma_device_type": 2 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": "system", 00:11:45.172 "dma_device_type": 1 00:11:45.172 }, 00:11:45.172 { 00:11:45.172 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:45.172 "dma_device_type": 2 00:11:45.172 } 00:11:45.172 ], 00:11:45.172 "driver_specific": { 00:11:45.172 "raid": { 00:11:45.172 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:45.172 "strip_size_kb": 64, 00:11:45.172 "state": "online", 00:11:45.172 "raid_level": "raid0", 00:11:45.172 "superblock": true, 00:11:45.172 "num_base_bdevs": 4, 00:11:45.172 "num_base_bdevs_discovered": 4, 00:11:45.172 "num_base_bdevs_operational": 4, 00:11:45.172 "base_bdevs_list": [ 00:11:45.172 { 00:11:45.172 "name": "pt1", 00:11:45.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.173 "is_configured": true, 00:11:45.173 "data_offset": 2048, 00:11:45.173 "data_size": 63488 00:11:45.173 }, 00:11:45.173 { 00:11:45.173 "name": "pt2", 00:11:45.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.173 "is_configured": true, 00:11:45.173 "data_offset": 2048, 00:11:45.173 "data_size": 63488 00:11:45.173 }, 00:11:45.173 { 00:11:45.173 "name": "pt3", 00:11:45.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.173 "is_configured": true, 00:11:45.173 "data_offset": 2048, 00:11:45.173 "data_size": 63488 00:11:45.173 }, 00:11:45.173 { 00:11:45.173 "name": "pt4", 00:11:45.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.173 "is_configured": true, 00:11:45.173 "data_offset": 2048, 00:11:45.173 "data_size": 63488 00:11:45.173 } 00:11:45.173 ] 00:11:45.173 } 00:11:45.173 } 00:11:45.173 }' 00:11:45.173 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.431 pt2 00:11:45.431 pt3 00:11:45.431 pt4' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.431 03:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.431 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.431 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.432 03:22:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.432 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 [2024-11-05 03:22:59.121456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3cf4c2cd-77e8-4bd2-a046-ab6134da819e 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3cf4c2cd-77e8-4bd2-a046-ab6134da819e ']' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 [2024-11-05 03:22:59.169076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.691 [2024-11-05 03:22:59.169225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.691 [2024-11-05 03:22:59.169454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.691 [2024-11-05 03:22:59.169665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.691 [2024-11-05 03:22:59.169821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.691 [2024-11-05 03:22:59.317140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:45.691 [2024-11-05 03:22:59.319796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:45.691 [2024-11-05 03:22:59.319866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:45.691 [2024-11-05 03:22:59.319918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:45.691 [2024-11-05 03:22:59.319989] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:45.691 [2024-11-05 03:22:59.320061] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:45.691 [2024-11-05 03:22:59.320097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:45.691 [2024-11-05 03:22:59.320129] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:45.691 [2024-11-05 03:22:59.320150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.691 [2024-11-05 03:22:59.320169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:45.691 request: 00:11:45.691 { 00:11:45.691 "name": "raid_bdev1", 00:11:45.691 "raid_level": "raid0", 00:11:45.691 "base_bdevs": [ 00:11:45.691 "malloc1", 00:11:45.691 "malloc2", 00:11:45.691 "malloc3", 00:11:45.691 "malloc4" 00:11:45.691 ], 00:11:45.691 "strip_size_kb": 64, 00:11:45.691 "superblock": false, 00:11:45.691 "method": "bdev_raid_create", 00:11:45.691 "req_id": 1 00:11:45.691 } 00:11:45.691 Got JSON-RPC error response 00:11:45.691 response: 00:11:45.691 { 00:11:45.691 "code": -17, 00:11:45.691 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:45.691 } 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:45.691 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.950 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.950 [2024-11-05 03:22:59.389131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.950 [2024-11-05 03:22:59.389348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.950 [2024-11-05 03:22:59.389442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.950 [2024-11-05 03:22:59.389570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.950 [2024-11-05 03:22:59.392470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.950 [2024-11-05 03:22:59.392646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.951 [2024-11-05 03:22:59.392859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.951 [2024-11-05 03:22:59.393059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.951 pt1 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.951 "name": "raid_bdev1", 00:11:45.951 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:45.951 "strip_size_kb": 64, 00:11:45.951 "state": "configuring", 00:11:45.951 "raid_level": "raid0", 00:11:45.951 "superblock": true, 00:11:45.951 "num_base_bdevs": 4, 00:11:45.951 "num_base_bdevs_discovered": 1, 00:11:45.951 "num_base_bdevs_operational": 4, 00:11:45.951 "base_bdevs_list": [ 00:11:45.951 { 00:11:45.951 "name": "pt1", 00:11:45.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.951 "is_configured": true, 00:11:45.951 "data_offset": 2048, 00:11:45.951 "data_size": 63488 00:11:45.951 }, 00:11:45.951 { 00:11:45.951 "name": null, 00:11:45.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.951 "is_configured": false, 00:11:45.951 "data_offset": 2048, 00:11:45.951 "data_size": 63488 00:11:45.951 }, 00:11:45.951 { 00:11:45.951 "name": null, 00:11:45.951 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:45.951 "is_configured": false, 00:11:45.951 "data_offset": 2048, 00:11:45.951 "data_size": 63488 00:11:45.951 }, 00:11:45.951 { 00:11:45.951 "name": null, 00:11:45.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.951 "is_configured": false, 00:11:45.951 "data_offset": 2048, 00:11:45.951 "data_size": 63488 00:11:45.951 } 00:11:45.951 ] 00:11:45.951 }' 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.951 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.549 [2024-11-05 03:22:59.921579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.549 [2024-11-05 03:22:59.921687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.549 [2024-11-05 03:22:59.921719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:46.549 [2024-11-05 03:22:59.921737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.549 [2024-11-05 03:22:59.922291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.549 [2024-11-05 03:22:59.922355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.549 [2024-11-05 03:22:59.922455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.549 [2024-11-05 03:22:59.922493] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.549 pt2 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.549 [2024-11-05 03:22:59.929567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.549 03:22:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.549 "name": "raid_bdev1", 00:11:46.549 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:46.549 "strip_size_kb": 64, 00:11:46.549 "state": "configuring", 00:11:46.549 "raid_level": "raid0", 00:11:46.549 "superblock": true, 00:11:46.549 "num_base_bdevs": 4, 00:11:46.549 "num_base_bdevs_discovered": 1, 00:11:46.549 "num_base_bdevs_operational": 4, 00:11:46.549 "base_bdevs_list": [ 00:11:46.549 { 00:11:46.549 "name": "pt1", 00:11:46.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.549 "is_configured": true, 00:11:46.549 "data_offset": 2048, 00:11:46.549 "data_size": 63488 00:11:46.549 }, 00:11:46.549 { 00:11:46.549 "name": null, 00:11:46.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.549 "is_configured": false, 00:11:46.549 "data_offset": 0, 00:11:46.549 "data_size": 63488 00:11:46.549 }, 00:11:46.549 { 00:11:46.549 "name": null, 00:11:46.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.549 "is_configured": false, 00:11:46.549 "data_offset": 2048, 00:11:46.549 "data_size": 63488 00:11:46.549 }, 00:11:46.549 { 00:11:46.549 "name": null, 00:11:46.549 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.549 "is_configured": false, 00:11:46.549 "data_offset": 2048, 00:11:46.549 "data_size": 63488 00:11:46.549 } 00:11:46.549 ] 00:11:46.549 }' 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.549 03:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.808 [2024-11-05 03:23:00.433765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.808 [2024-11-05 03:23:00.433849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.808 [2024-11-05 03:23:00.433892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:46.808 [2024-11-05 03:23:00.433908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.808 [2024-11-05 03:23:00.434541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.808 [2024-11-05 03:23:00.434568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.808 [2024-11-05 03:23:00.434708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.808 [2024-11-05 03:23:00.434740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.808 pt2 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.808 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.808 [2024-11-05 03:23:00.441719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.808 [2024-11-05 03:23:00.441917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.808 [2024-11-05 03:23:00.441963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:46.808 [2024-11-05 03:23:00.441981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.808 [2024-11-05 03:23:00.442484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.808 [2024-11-05 03:23:00.442520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.808 [2024-11-05 03:23:00.442608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.808 [2024-11-05 03:23:00.442638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.067 pt3 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.067 [2024-11-05 03:23:00.449708] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:47.067 [2024-11-05 03:23:00.449770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.067 [2024-11-05 03:23:00.449800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:47.067 [2024-11-05 03:23:00.449814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.067 [2024-11-05 03:23:00.450314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.067 [2024-11-05 03:23:00.450378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.067 [2024-11-05 03:23:00.450463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.067 [2024-11-05 03:23:00.450492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.067 [2024-11-05 03:23:00.450657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.067 [2024-11-05 03:23:00.450672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.067 [2024-11-05 03:23:00.450989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:47.067 [2024-11-05 03:23:00.451197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.067 [2024-11-05 03:23:00.451219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.067 [2024-11-05 03:23:00.451435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.067 pt4 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.067 
03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.067 "name": "raid_bdev1", 00:11:47.067 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:47.067 "strip_size_kb": 64, 00:11:47.067 "state": "online", 00:11:47.067 "raid_level": "raid0", 00:11:47.067 "superblock": true, 00:11:47.067 
"num_base_bdevs": 4, 00:11:47.067 "num_base_bdevs_discovered": 4, 00:11:47.067 "num_base_bdevs_operational": 4, 00:11:47.067 "base_bdevs_list": [ 00:11:47.067 { 00:11:47.067 "name": "pt1", 00:11:47.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.067 "is_configured": true, 00:11:47.067 "data_offset": 2048, 00:11:47.067 "data_size": 63488 00:11:47.067 }, 00:11:47.067 { 00:11:47.067 "name": "pt2", 00:11:47.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.067 "is_configured": true, 00:11:47.067 "data_offset": 2048, 00:11:47.067 "data_size": 63488 00:11:47.067 }, 00:11:47.067 { 00:11:47.067 "name": "pt3", 00:11:47.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.067 "is_configured": true, 00:11:47.067 "data_offset": 2048, 00:11:47.067 "data_size": 63488 00:11:47.067 }, 00:11:47.067 { 00:11:47.067 "name": "pt4", 00:11:47.067 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.067 "is_configured": true, 00:11:47.067 "data_offset": 2048, 00:11:47.067 "data_size": 63488 00:11:47.067 } 00:11:47.067 ] 00:11:47.067 }' 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.067 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.634 03:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.635 03:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.635 [2024-11-05 03:23:00.986350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.635 "name": "raid_bdev1", 00:11:47.635 "aliases": [ 00:11:47.635 "3cf4c2cd-77e8-4bd2-a046-ab6134da819e" 00:11:47.635 ], 00:11:47.635 "product_name": "Raid Volume", 00:11:47.635 "block_size": 512, 00:11:47.635 "num_blocks": 253952, 00:11:47.635 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:47.635 "assigned_rate_limits": { 00:11:47.635 "rw_ios_per_sec": 0, 00:11:47.635 "rw_mbytes_per_sec": 0, 00:11:47.635 "r_mbytes_per_sec": 0, 00:11:47.635 "w_mbytes_per_sec": 0 00:11:47.635 }, 00:11:47.635 "claimed": false, 00:11:47.635 "zoned": false, 00:11:47.635 "supported_io_types": { 00:11:47.635 "read": true, 00:11:47.635 "write": true, 00:11:47.635 "unmap": true, 00:11:47.635 "flush": true, 00:11:47.635 "reset": true, 00:11:47.635 "nvme_admin": false, 00:11:47.635 "nvme_io": false, 00:11:47.635 "nvme_io_md": false, 00:11:47.635 "write_zeroes": true, 00:11:47.635 "zcopy": false, 00:11:47.635 "get_zone_info": false, 00:11:47.635 "zone_management": false, 00:11:47.635 "zone_append": false, 00:11:47.635 "compare": false, 00:11:47.635 "compare_and_write": false, 00:11:47.635 "abort": false, 00:11:47.635 "seek_hole": false, 00:11:47.635 "seek_data": false, 00:11:47.635 "copy": false, 00:11:47.635 "nvme_iov_md": false 00:11:47.635 }, 00:11:47.635 "memory_domains": [ 00:11:47.635 { 00:11:47.635 "dma_device_id": "system", 
00:11:47.635 "dma_device_type": 1 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.635 "dma_device_type": 2 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "system", 00:11:47.635 "dma_device_type": 1 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.635 "dma_device_type": 2 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "system", 00:11:47.635 "dma_device_type": 1 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.635 "dma_device_type": 2 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "system", 00:11:47.635 "dma_device_type": 1 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.635 "dma_device_type": 2 00:11:47.635 } 00:11:47.635 ], 00:11:47.635 "driver_specific": { 00:11:47.635 "raid": { 00:11:47.635 "uuid": "3cf4c2cd-77e8-4bd2-a046-ab6134da819e", 00:11:47.635 "strip_size_kb": 64, 00:11:47.635 "state": "online", 00:11:47.635 "raid_level": "raid0", 00:11:47.635 "superblock": true, 00:11:47.635 "num_base_bdevs": 4, 00:11:47.635 "num_base_bdevs_discovered": 4, 00:11:47.635 "num_base_bdevs_operational": 4, 00:11:47.635 "base_bdevs_list": [ 00:11:47.635 { 00:11:47.635 "name": "pt1", 00:11:47.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.635 "is_configured": true, 00:11:47.635 "data_offset": 2048, 00:11:47.635 "data_size": 63488 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "name": "pt2", 00:11:47.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.635 "is_configured": true, 00:11:47.635 "data_offset": 2048, 00:11:47.635 "data_size": 63488 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "name": "pt3", 00:11:47.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.635 "is_configured": true, 00:11:47.635 "data_offset": 2048, 00:11:47.635 "data_size": 63488 00:11:47.635 }, 00:11:47.635 { 00:11:47.635 "name": "pt4", 00:11:47.635 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.635 "is_configured": true, 00:11:47.635 "data_offset": 2048, 00:11:47.635 "data_size": 63488 00:11:47.635 } 00:11:47.635 ] 00:11:47.635 } 00:11:47.635 } 00:11:47.635 }' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.635 pt2 00:11:47.635 pt3 00:11:47.635 pt4' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.635 
03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.635 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.894 [2024-11-05 03:23:01.362413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3cf4c2cd-77e8-4bd2-a046-ab6134da819e '!=' 3cf4c2cd-77e8-4bd2-a046-ab6134da819e ']' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70585 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70585 ']' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70585 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:47.894 03:23:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70585 00:11:47.894 killing process with pid 70585 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70585' 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70585 00:11:47.894 [2024-11-05 03:23:01.444540] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.894 03:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70585 00:11:47.894 [2024-11-05 03:23:01.444650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.894 [2024-11-05 03:23:01.444770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.894 [2024-11-05 03:23:01.444785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:48.153 [2024-11-05 03:23:01.772599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.532 ************************************ 00:11:49.532 END TEST raid_superblock_test 00:11:49.532 ************************************ 00:11:49.532 03:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:49.532 00:11:49.532 real 0m5.885s 00:11:49.532 user 0m8.851s 00:11:49.533 sys 0m0.892s 00:11:49.533 03:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.533 03:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.533 
03:23:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:49.533 03:23:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:49.533 03:23:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.533 03:23:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.533 ************************************ 00:11:49.533 START TEST raid_read_error_test 00:11:49.533 ************************************ 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:49.533 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mASxGnqBlI 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70851 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70851 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 70851 ']' 00:11:49.534 03:23:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.534 03:23:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.534 [2024-11-05 03:23:02.955073] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:49.534 [2024-11-05 03:23:02.955273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70851 ] 00:11:49.534 [2024-11-05 03:23:03.139195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.794 [2024-11-05 03:23:03.267202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.053 [2024-11-05 03:23:03.461659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.053 [2024-11-05 03:23:03.461933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.312 BaseBdev1_malloc 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.312 true 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.312 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.312 [2024-11-05 03:23:03.945773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.312 [2024-11-05 03:23:03.946053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.312 [2024-11-05 03:23:03.946095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.312 [2024-11-05 03:23:03.946115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.570 [2024-11-05 03:23:03.948911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.570 [2024-11-05 03:23:03.948960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.570 BaseBdev1 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 BaseBdev2_malloc 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 true 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 03:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 [2024-11-05 03:23:04.001476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.570 [2024-11-05 03:23:04.001700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.570 [2024-11-05 03:23:04.001738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.570 [2024-11-05 03:23:04.001757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.570 [2024-11-05 03:23:04.004518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.570 [2024-11-05 03:23:04.004570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.570 BaseBdev2 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 BaseBdev3_malloc 00:11:50.570 03:23:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 true 00:11:50.570 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 [2024-11-05 03:23:04.075338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:50.571 [2024-11-05 03:23:04.075405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.571 [2024-11-05 03:23:04.075433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:50.571 [2024-11-05 03:23:04.075451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.571 [2024-11-05 03:23:04.078225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.571 [2024-11-05 03:23:04.078277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:50.571 BaseBdev3 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 BaseBdev4_malloc 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 true 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 [2024-11-05 03:23:04.131432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:50.571 [2024-11-05 03:23:04.131506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.571 [2024-11-05 03:23:04.131536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.571 [2024-11-05 03:23:04.131553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.571 [2024-11-05 03:23:04.134283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.571 [2024-11-05 03:23:04.134356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:50.571 BaseBdev4 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 [2024-11-05 03:23:04.139512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.571 [2024-11-05 03:23:04.141929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.571 [2024-11-05 03:23:04.142037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.571 [2024-11-05 03:23:04.142142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.571 [2024-11-05 03:23:04.142459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:50.571 [2024-11-05 03:23:04.142486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.571 [2024-11-05 03:23:04.142798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:50.571 [2024-11-05 03:23:04.143019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:50.571 [2024-11-05 03:23:04.143037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:50.571 [2024-11-05 03:23:04.143227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:50.571 03:23:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.571 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.830 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.830 "name": "raid_bdev1", 00:11:50.830 "uuid": "6a45dda0-cf8f-4767-aa68-c665ac920348", 00:11:50.830 "strip_size_kb": 64, 00:11:50.830 "state": "online", 00:11:50.830 "raid_level": "raid0", 00:11:50.830 "superblock": true, 00:11:50.830 "num_base_bdevs": 4, 00:11:50.830 "num_base_bdevs_discovered": 4, 00:11:50.830 "num_base_bdevs_operational": 4, 00:11:50.830 "base_bdevs_list": [ 00:11:50.830 
{ 00:11:50.830 "name": "BaseBdev1", 00:11:50.830 "uuid": "5cc2f2ed-3644-5bfd-9ae9-0a8086fcd020", 00:11:50.830 "is_configured": true, 00:11:50.830 "data_offset": 2048, 00:11:50.830 "data_size": 63488 00:11:50.830 }, 00:11:50.830 { 00:11:50.830 "name": "BaseBdev2", 00:11:50.830 "uuid": "4b2357fb-0177-58b1-808e-2b0f906351ef", 00:11:50.830 "is_configured": true, 00:11:50.830 "data_offset": 2048, 00:11:50.830 "data_size": 63488 00:11:50.830 }, 00:11:50.830 { 00:11:50.830 "name": "BaseBdev3", 00:11:50.830 "uuid": "90f3a4ce-6283-5c28-a08e-2ba7b7858c89", 00:11:50.830 "is_configured": true, 00:11:50.830 "data_offset": 2048, 00:11:50.830 "data_size": 63488 00:11:50.830 }, 00:11:50.830 { 00:11:50.830 "name": "BaseBdev4", 00:11:50.830 "uuid": "1c3bc8a8-d7c0-5cb4-8337-d5cb6bdda20b", 00:11:50.830 "is_configured": true, 00:11:50.830 "data_offset": 2048, 00:11:50.830 "data_size": 63488 00:11:50.830 } 00:11:50.830 ] 00:11:50.830 }' 00:11:50.830 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.830 03:23:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.088 03:23:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.386 [2024-11-05 03:23:04.789015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.321 03:23:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.321 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.322 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.322 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.322 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.322 03:23:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.322 "name": "raid_bdev1", 00:11:52.322 "uuid": "6a45dda0-cf8f-4767-aa68-c665ac920348", 00:11:52.322 "strip_size_kb": 64, 00:11:52.322 "state": "online", 00:11:52.322 "raid_level": "raid0", 00:11:52.322 "superblock": true, 00:11:52.322 "num_base_bdevs": 4, 00:11:52.322 "num_base_bdevs_discovered": 4, 00:11:52.322 "num_base_bdevs_operational": 4, 00:11:52.322 "base_bdevs_list": [ 00:11:52.322 { 00:11:52.322 "name": "BaseBdev1", 00:11:52.322 "uuid": "5cc2f2ed-3644-5bfd-9ae9-0a8086fcd020", 00:11:52.322 "is_configured": true, 00:11:52.322 "data_offset": 2048, 00:11:52.322 "data_size": 63488 00:11:52.322 }, 00:11:52.322 { 00:11:52.322 "name": "BaseBdev2", 00:11:52.322 "uuid": "4b2357fb-0177-58b1-808e-2b0f906351ef", 00:11:52.322 "is_configured": true, 00:11:52.322 "data_offset": 2048, 00:11:52.322 "data_size": 63488 00:11:52.322 }, 00:11:52.322 { 00:11:52.322 "name": "BaseBdev3", 00:11:52.322 "uuid": "90f3a4ce-6283-5c28-a08e-2ba7b7858c89", 00:11:52.322 "is_configured": true, 00:11:52.322 "data_offset": 2048, 00:11:52.322 "data_size": 63488 00:11:52.322 }, 00:11:52.322 { 00:11:52.322 "name": "BaseBdev4", 00:11:52.322 "uuid": "1c3bc8a8-d7c0-5cb4-8337-d5cb6bdda20b", 00:11:52.322 "is_configured": true, 00:11:52.322 "data_offset": 2048, 00:11:52.322 "data_size": 63488 00:11:52.322 } 00:11:52.322 ] 00:11:52.322 }' 00:11:52.322 03:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.322 03:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.580 [2024-11-05 03:23:06.151269] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.580 [2024-11-05 03:23:06.151324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.580 [2024-11-05 03:23:06.154761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.580 [2024-11-05 03:23:06.154967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.580 [2024-11-05 03:23:06.155162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.580 [2024-11-05 03:23:06.155416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.580 { 00:11:52.580 "results": [ 00:11:52.580 { 00:11:52.580 "job": "raid_bdev1", 00:11:52.580 "core_mask": "0x1", 00:11:52.580 "workload": "randrw", 00:11:52.580 "percentage": 50, 00:11:52.580 "status": "finished", 00:11:52.580 "queue_depth": 1, 00:11:52.580 "io_size": 131072, 00:11:52.580 "runtime": 1.359559, 00:11:52.580 "iops": 10804.238727410873, 00:11:52.580 "mibps": 1350.5298409263592, 00:11:52.580 "io_failed": 1, 00:11:52.580 "io_timeout": 0, 00:11:52.580 "avg_latency_us": 129.24176397054273, 00:11:52.580 "min_latency_us": 43.75272727272727, 00:11:52.580 "max_latency_us": 2040.5527272727272 00:11:52.580 } 00:11:52.580 ], 00:11:52.580 "core_count": 1 00:11:52.580 } 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70851 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 70851 ']' 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 70851 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70851 00:11:52.580 killing process with pid 70851 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70851' 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 70851 00:11:52.580 [2024-11-05 03:23:06.191743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.580 03:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 70851 00:11:53.146 [2024-11-05 03:23:06.481616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mASxGnqBlI 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.083 ************************************ 00:11:54.083 END TEST raid_read_error_test 00:11:54.083 ************************************ 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:54.083 00:11:54.083 real 0m4.732s 
00:11:54.083 user 0m5.807s 00:11:54.083 sys 0m0.566s 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.083 03:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.083 03:23:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:54.083 03:23:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:54.083 03:23:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.083 03:23:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.083 ************************************ 00:11:54.083 START TEST raid_write_error_test 00:11:54.083 ************************************ 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.disZXl4PEZ 00:11:54.083 03:23:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70997 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70997 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 70997 ']' 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:54.083 03:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.342 [2024-11-05 03:23:07.727934] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:54.342 [2024-11-05 03:23:07.728114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70997 ] 00:11:54.342 [2024-11-05 03:23:07.908591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.600 [2024-11-05 03:23:08.034864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.601 [2024-11-05 03:23:08.237186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.601 [2024-11-05 03:23:08.237260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 BaseBdev1_malloc 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 true 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 [2024-11-05 03:23:08.728459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:55.188 [2024-11-05 03:23:08.728529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.188 [2024-11-05 03:23:08.728559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:55.188 [2024-11-05 03:23:08.728579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.188 [2024-11-05 03:23:08.731381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.188 [2024-11-05 03:23:08.731584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.188 BaseBdev1 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 BaseBdev2_malloc 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:55.188 03:23:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 true 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.188 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.188 [2024-11-05 03:23:08.784177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:55.188 [2024-11-05 03:23:08.784250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.189 [2024-11-05 03:23:08.784278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:55.189 [2024-11-05 03:23:08.784313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.189 [2024-11-05 03:23:08.787051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.189 [2024-11-05 03:23:08.787247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.189 BaseBdev2 00:11:55.189 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.189 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.189 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.189 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.189 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:55.448 BaseBdev3_malloc 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 true 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 [2024-11-05 03:23:08.853388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:55.448 [2024-11-05 03:23:08.853605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.448 [2024-11-05 03:23:08.853658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:55.448 [2024-11-05 03:23:08.853680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.448 [2024-11-05 03:23:08.856457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.448 [2024-11-05 03:23:08.856509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.448 BaseBdev3 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 BaseBdev4_malloc 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 true 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 [2024-11-05 03:23:08.909182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:55.448 [2024-11-05 03:23:08.909251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.448 [2024-11-05 03:23:08.909280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:55.448 [2024-11-05 03:23:08.909312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.448 [2024-11-05 03:23:08.912017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.448 [2024-11-05 03:23:08.912202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:55.448 BaseBdev4 
00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 [2024-11-05 03:23:08.917251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.448 [2024-11-05 03:23:08.919656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.448 [2024-11-05 03:23:08.919762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.448 [2024-11-05 03:23:08.919871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.448 [2024-11-05 03:23:08.920170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:55.448 [2024-11-05 03:23:08.920196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.448 [2024-11-05 03:23:08.920535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:55.448 [2024-11-05 03:23:08.920754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:55.448 [2024-11-05 03:23:08.920781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:55.448 [2024-11-05 03:23:08.920980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.448 "name": "raid_bdev1", 00:11:55.448 "uuid": "e2724dac-d7e2-47a9-b380-146665b9308f", 00:11:55.448 "strip_size_kb": 64, 00:11:55.448 "state": "online", 00:11:55.448 "raid_level": "raid0", 00:11:55.448 "superblock": true, 00:11:55.448 "num_base_bdevs": 4, 00:11:55.448 "num_base_bdevs_discovered": 4, 00:11:55.448 
"num_base_bdevs_operational": 4, 00:11:55.448 "base_bdevs_list": [ 00:11:55.448 { 00:11:55.448 "name": "BaseBdev1", 00:11:55.448 "uuid": "d58fcfb4-5625-52bd-8c8f-e74c4ac85bd5", 00:11:55.448 "is_configured": true, 00:11:55.448 "data_offset": 2048, 00:11:55.448 "data_size": 63488 00:11:55.448 }, 00:11:55.448 { 00:11:55.448 "name": "BaseBdev2", 00:11:55.448 "uuid": "0a2a0f05-02d6-5fd0-8e29-371c0496365d", 00:11:55.448 "is_configured": true, 00:11:55.448 "data_offset": 2048, 00:11:55.448 "data_size": 63488 00:11:55.448 }, 00:11:55.448 { 00:11:55.448 "name": "BaseBdev3", 00:11:55.448 "uuid": "e15f3bbe-0e29-59b3-9b12-3199a748ec72", 00:11:55.448 "is_configured": true, 00:11:55.448 "data_offset": 2048, 00:11:55.448 "data_size": 63488 00:11:55.448 }, 00:11:55.448 { 00:11:55.448 "name": "BaseBdev4", 00:11:55.448 "uuid": "6e1062aa-4a2f-5b64-9ebe-a8eaf5ec1602", 00:11:55.448 "is_configured": true, 00:11:55.448 "data_offset": 2048, 00:11:55.448 "data_size": 63488 00:11:55.448 } 00:11:55.448 ] 00:11:55.448 }' 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.448 03:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.016 03:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.016 03:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.016 [2024-11-05 03:23:09.550795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.951 "name": "raid_bdev1", 00:11:56.951 "uuid": "e2724dac-d7e2-47a9-b380-146665b9308f", 00:11:56.951 "strip_size_kb": 64, 00:11:56.951 "state": "online", 00:11:56.951 "raid_level": "raid0", 00:11:56.951 "superblock": true, 00:11:56.951 "num_base_bdevs": 4, 00:11:56.951 "num_base_bdevs_discovered": 4, 00:11:56.951 "num_base_bdevs_operational": 4, 00:11:56.951 "base_bdevs_list": [ 00:11:56.951 { 00:11:56.951 "name": "BaseBdev1", 00:11:56.951 "uuid": "d58fcfb4-5625-52bd-8c8f-e74c4ac85bd5", 00:11:56.951 "is_configured": true, 00:11:56.951 "data_offset": 2048, 00:11:56.951 "data_size": 63488 00:11:56.951 }, 00:11:56.951 { 00:11:56.951 "name": "BaseBdev2", 00:11:56.951 "uuid": "0a2a0f05-02d6-5fd0-8e29-371c0496365d", 00:11:56.951 "is_configured": true, 00:11:56.951 "data_offset": 2048, 00:11:56.951 "data_size": 63488 00:11:56.951 }, 00:11:56.951 { 00:11:56.951 "name": "BaseBdev3", 00:11:56.951 "uuid": "e15f3bbe-0e29-59b3-9b12-3199a748ec72", 00:11:56.951 "is_configured": true, 00:11:56.951 "data_offset": 2048, 00:11:56.951 "data_size": 63488 00:11:56.951 }, 00:11:56.951 { 00:11:56.951 "name": "BaseBdev4", 00:11:56.951 "uuid": "6e1062aa-4a2f-5b64-9ebe-a8eaf5ec1602", 00:11:56.951 "is_configured": true, 00:11:56.951 "data_offset": 2048, 00:11:56.951 "data_size": 63488 00:11:56.951 } 00:11:56.951 ] 00:11:56.951 }' 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.951 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:57.519 [2024-11-05 03:23:10.965853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.519 [2024-11-05 03:23:10.966038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.519 { 00:11:57.519 "results": [ 00:11:57.519 { 00:11:57.519 "job": "raid_bdev1", 00:11:57.519 "core_mask": "0x1", 00:11:57.519 "workload": "randrw", 00:11:57.519 "percentage": 50, 00:11:57.519 "status": "finished", 00:11:57.519 "queue_depth": 1, 00:11:57.519 "io_size": 131072, 00:11:57.519 "runtime": 1.412562, 00:11:57.519 "iops": 10816.516372378699, 00:11:57.519 "mibps": 1352.0645465473374, 00:11:57.519 "io_failed": 1, 00:11:57.519 "io_timeout": 0, 00:11:57.519 "avg_latency_us": 129.01448072346503, 00:11:57.519 "min_latency_us": 42.589090909090906, 00:11:57.519 "max_latency_us": 1869.2654545454545 00:11:57.519 } 00:11:57.519 ], 00:11:57.519 "core_count": 1 00:11:57.519 } 00:11:57.519 [2024-11-05 03:23:10.969354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.519 [2024-11-05 03:23:10.969430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.519 [2024-11-05 03:23:10.969489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.519 [2024-11-05 03:23:10.969509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70997 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 70997 ']' 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 70997 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:57.519 03:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70997 00:11:57.519 killing process with pid 70997 00:11:57.519 03:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:57.519 03:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:57.519 03:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70997' 00:11:57.519 03:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 70997 00:11:57.519 [2024-11-05 03:23:11.006116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.519 03:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 70997 00:11:57.778 [2024-11-05 03:23:11.298571] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.disZXl4PEZ 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:59.156 00:11:59.156 real 0m4.777s 00:11:59.156 user 0m5.895s 00:11:59.156 sys 0m0.568s 00:11:59.156 
03:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:59.156 ************************************ 00:11:59.156 END TEST raid_write_error_test 00:11:59.156 ************************************ 00:11:59.156 03:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.156 03:23:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:59.156 03:23:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:59.156 03:23:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:59.156 03:23:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.156 03:23:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.156 ************************************ 00:11:59.156 START TEST raid_state_function_test 00:11:59.156 ************************************ 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.156 03:23:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:59.156 03:23:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:59.156 Process raid pid: 71149 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71149 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71149' 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71149 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71149 ']' 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.156 03:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.157 [2024-11-05 03:23:12.545160] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:11:59.157 [2024-11-05 03:23:12.545681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.157 [2024-11-05 03:23:12.731340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.416 [2024-11-05 03:23:12.860997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.675 [2024-11-05 03:23:13.068704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.675 [2024-11-05 03:23:13.068764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.934 [2024-11-05 03:23:13.516209] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.934 [2024-11-05 03:23:13.516280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.934 [2024-11-05 03:23:13.516297] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.934 [2024-11-05 03:23:13.516330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.934 [2024-11-05 03:23:13.516341] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:59.934 [2024-11-05 03:23:13.516356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.934 [2024-11-05 03:23:13.516366] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:59.934 [2024-11-05 03:23:13.516380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.934 "name": "Existed_Raid", 00:11:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.934 "strip_size_kb": 64, 00:11:59.934 "state": "configuring", 00:11:59.934 "raid_level": "concat", 00:11:59.934 "superblock": false, 00:11:59.934 "num_base_bdevs": 4, 00:11:59.934 "num_base_bdevs_discovered": 0, 00:11:59.934 "num_base_bdevs_operational": 4, 00:11:59.934 "base_bdevs_list": [ 00:11:59.934 { 00:11:59.934 "name": "BaseBdev1", 00:11:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.934 "is_configured": false, 00:11:59.934 "data_offset": 0, 00:11:59.934 "data_size": 0 00:11:59.934 }, 00:11:59.934 { 00:11:59.934 "name": "BaseBdev2", 00:11:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.934 "is_configured": false, 00:11:59.934 "data_offset": 0, 00:11:59.934 "data_size": 0 00:11:59.934 }, 00:11:59.934 { 00:11:59.934 "name": "BaseBdev3", 00:11:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.934 "is_configured": false, 00:11:59.934 "data_offset": 0, 00:11:59.934 "data_size": 0 00:11:59.934 }, 00:11:59.934 { 00:11:59.934 "name": "BaseBdev4", 00:11:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.934 "is_configured": false, 00:11:59.934 "data_offset": 0, 00:11:59.934 "data_size": 0 00:11:59.934 } 00:11:59.934 ] 00:11:59.934 }' 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.934 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.502 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:00.502 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.502 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.502 [2024-11-05 03:23:13.980282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.502 [2024-11-05 03:23:13.980344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.503 [2024-11-05 03:23:13.988267] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.503 [2024-11-05 03:23:13.988327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.503 [2024-11-05 03:23:13.988344] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.503 [2024-11-05 03:23:13.988360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.503 [2024-11-05 03:23:13.988370] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:00.503 [2024-11-05 03:23:13.988384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.503 [2024-11-05 03:23:13.988393] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.503 [2024-11-05 03:23:13.988407] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.503 03:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.503 [2024-11-05 03:23:14.032909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.503 BaseBdev1 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.503 [ 00:12:00.503 { 00:12:00.503 "name": "BaseBdev1", 00:12:00.503 "aliases": [ 00:12:00.503 "45ac1418-6bbc-40fb-83d9-3106099b4215" 00:12:00.503 ], 00:12:00.503 "product_name": "Malloc disk", 00:12:00.503 "block_size": 512, 00:12:00.503 "num_blocks": 65536, 00:12:00.503 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:00.503 "assigned_rate_limits": { 00:12:00.503 "rw_ios_per_sec": 0, 00:12:00.503 "rw_mbytes_per_sec": 0, 00:12:00.503 "r_mbytes_per_sec": 0, 00:12:00.503 "w_mbytes_per_sec": 0 00:12:00.503 }, 00:12:00.503 "claimed": true, 00:12:00.503 "claim_type": "exclusive_write", 00:12:00.503 "zoned": false, 00:12:00.503 "supported_io_types": { 00:12:00.503 "read": true, 00:12:00.503 "write": true, 00:12:00.503 "unmap": true, 00:12:00.503 "flush": true, 00:12:00.503 "reset": true, 00:12:00.503 "nvme_admin": false, 00:12:00.503 "nvme_io": false, 00:12:00.503 "nvme_io_md": false, 00:12:00.503 "write_zeroes": true, 00:12:00.503 "zcopy": true, 00:12:00.503 "get_zone_info": false, 00:12:00.503 "zone_management": false, 00:12:00.503 "zone_append": false, 00:12:00.503 "compare": false, 00:12:00.503 "compare_and_write": false, 00:12:00.503 "abort": true, 00:12:00.503 "seek_hole": false, 00:12:00.503 "seek_data": false, 00:12:00.503 "copy": true, 00:12:00.503 "nvme_iov_md": false 00:12:00.503 }, 00:12:00.503 "memory_domains": [ 00:12:00.503 { 00:12:00.503 "dma_device_id": "system", 00:12:00.503 "dma_device_type": 1 00:12:00.503 }, 00:12:00.503 { 00:12:00.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.503 "dma_device_type": 2 00:12:00.503 } 00:12:00.503 ], 00:12:00.503 "driver_specific": {} 00:12:00.503 } 00:12:00.503 ] 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.503 "name": "Existed_Raid", 
00:12:00.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.503 "strip_size_kb": 64, 00:12:00.503 "state": "configuring", 00:12:00.503 "raid_level": "concat", 00:12:00.503 "superblock": false, 00:12:00.503 "num_base_bdevs": 4, 00:12:00.503 "num_base_bdevs_discovered": 1, 00:12:00.503 "num_base_bdevs_operational": 4, 00:12:00.503 "base_bdevs_list": [ 00:12:00.503 { 00:12:00.503 "name": "BaseBdev1", 00:12:00.503 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:00.503 "is_configured": true, 00:12:00.503 "data_offset": 0, 00:12:00.503 "data_size": 65536 00:12:00.503 }, 00:12:00.503 { 00:12:00.503 "name": "BaseBdev2", 00:12:00.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.503 "is_configured": false, 00:12:00.503 "data_offset": 0, 00:12:00.503 "data_size": 0 00:12:00.503 }, 00:12:00.503 { 00:12:00.503 "name": "BaseBdev3", 00:12:00.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.503 "is_configured": false, 00:12:00.503 "data_offset": 0, 00:12:00.503 "data_size": 0 00:12:00.503 }, 00:12:00.503 { 00:12:00.503 "name": "BaseBdev4", 00:12:00.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.503 "is_configured": false, 00:12:00.503 "data_offset": 0, 00:12:00.503 "data_size": 0 00:12:00.503 } 00:12:00.503 ] 00:12:00.503 }' 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.503 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 [2024-11-05 03:23:14.557114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.072 [2024-11-05 03:23:14.557193] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 [2024-11-05 03:23:14.565154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.072 [2024-11-05 03:23:14.567626] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.072 [2024-11-05 03:23:14.567706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.072 [2024-11-05 03:23:14.567721] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.072 [2024-11-05 03:23:14.567752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.072 [2024-11-05 03:23:14.567761] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.072 [2024-11-05 03:23:14.567773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.072 "name": "Existed_Raid", 00:12:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.072 "strip_size_kb": 64, 00:12:01.072 "state": "configuring", 00:12:01.072 "raid_level": "concat", 00:12:01.072 "superblock": false, 00:12:01.072 "num_base_bdevs": 4, 00:12:01.072 
"num_base_bdevs_discovered": 1, 00:12:01.072 "num_base_bdevs_operational": 4, 00:12:01.072 "base_bdevs_list": [ 00:12:01.072 { 00:12:01.072 "name": "BaseBdev1", 00:12:01.072 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:01.072 "is_configured": true, 00:12:01.072 "data_offset": 0, 00:12:01.072 "data_size": 65536 00:12:01.072 }, 00:12:01.072 { 00:12:01.072 "name": "BaseBdev2", 00:12:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.072 "is_configured": false, 00:12:01.072 "data_offset": 0, 00:12:01.072 "data_size": 0 00:12:01.072 }, 00:12:01.072 { 00:12:01.072 "name": "BaseBdev3", 00:12:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.072 "is_configured": false, 00:12:01.072 "data_offset": 0, 00:12:01.072 "data_size": 0 00:12:01.072 }, 00:12:01.072 { 00:12:01.072 "name": "BaseBdev4", 00:12:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.072 "is_configured": false, 00:12:01.072 "data_offset": 0, 00:12:01.072 "data_size": 0 00:12:01.072 } 00:12:01.072 ] 00:12:01.072 }' 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.072 03:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.640 [2024-11-05 03:23:15.128557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.640 BaseBdev2 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:01.640 03:23:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.640 [ 00:12:01.640 { 00:12:01.640 "name": "BaseBdev2", 00:12:01.640 "aliases": [ 00:12:01.640 "0436a0e6-011c-42b6-8fa8-304dc44dbbbf" 00:12:01.640 ], 00:12:01.640 "product_name": "Malloc disk", 00:12:01.640 "block_size": 512, 00:12:01.640 "num_blocks": 65536, 00:12:01.640 "uuid": "0436a0e6-011c-42b6-8fa8-304dc44dbbbf", 00:12:01.640 "assigned_rate_limits": { 00:12:01.640 "rw_ios_per_sec": 0, 00:12:01.640 "rw_mbytes_per_sec": 0, 00:12:01.640 "r_mbytes_per_sec": 0, 00:12:01.640 "w_mbytes_per_sec": 0 00:12:01.640 }, 00:12:01.640 "claimed": true, 00:12:01.640 "claim_type": "exclusive_write", 00:12:01.640 "zoned": false, 00:12:01.640 "supported_io_types": { 
00:12:01.640 "read": true, 00:12:01.640 "write": true, 00:12:01.640 "unmap": true, 00:12:01.640 "flush": true, 00:12:01.640 "reset": true, 00:12:01.640 "nvme_admin": false, 00:12:01.640 "nvme_io": false, 00:12:01.640 "nvme_io_md": false, 00:12:01.640 "write_zeroes": true, 00:12:01.640 "zcopy": true, 00:12:01.640 "get_zone_info": false, 00:12:01.640 "zone_management": false, 00:12:01.640 "zone_append": false, 00:12:01.640 "compare": false, 00:12:01.640 "compare_and_write": false, 00:12:01.640 "abort": true, 00:12:01.640 "seek_hole": false, 00:12:01.640 "seek_data": false, 00:12:01.640 "copy": true, 00:12:01.640 "nvme_iov_md": false 00:12:01.640 }, 00:12:01.640 "memory_domains": [ 00:12:01.640 { 00:12:01.640 "dma_device_id": "system", 00:12:01.640 "dma_device_type": 1 00:12:01.640 }, 00:12:01.640 { 00:12:01.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.640 "dma_device_type": 2 00:12:01.640 } 00:12:01.640 ], 00:12:01.640 "driver_specific": {} 00:12:01.640 } 00:12:01.640 ] 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.640 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.641 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.641 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.641 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.641 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.641 "name": "Existed_Raid", 00:12:01.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.641 "strip_size_kb": 64, 00:12:01.641 "state": "configuring", 00:12:01.641 "raid_level": "concat", 00:12:01.641 "superblock": false, 00:12:01.641 "num_base_bdevs": 4, 00:12:01.641 "num_base_bdevs_discovered": 2, 00:12:01.641 "num_base_bdevs_operational": 4, 00:12:01.641 "base_bdevs_list": [ 00:12:01.641 { 00:12:01.641 "name": "BaseBdev1", 00:12:01.641 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:01.641 "is_configured": true, 00:12:01.641 "data_offset": 0, 00:12:01.641 "data_size": 65536 00:12:01.641 }, 00:12:01.641 { 00:12:01.641 "name": "BaseBdev2", 00:12:01.641 "uuid": "0436a0e6-011c-42b6-8fa8-304dc44dbbbf", 00:12:01.641 
"is_configured": true, 00:12:01.641 "data_offset": 0, 00:12:01.641 "data_size": 65536 00:12:01.641 }, 00:12:01.641 { 00:12:01.641 "name": "BaseBdev3", 00:12:01.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.641 "is_configured": false, 00:12:01.641 "data_offset": 0, 00:12:01.641 "data_size": 0 00:12:01.641 }, 00:12:01.641 { 00:12:01.641 "name": "BaseBdev4", 00:12:01.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.641 "is_configured": false, 00:12:01.641 "data_offset": 0, 00:12:01.641 "data_size": 0 00:12:01.641 } 00:12:01.641 ] 00:12:01.641 }' 00:12:01.641 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.641 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 [2024-11-05 03:23:15.733784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.210 BaseBdev3 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 [ 00:12:02.210 { 00:12:02.210 "name": "BaseBdev3", 00:12:02.210 "aliases": [ 00:12:02.210 "969146d9-a250-4e82-9826-4fbafdabbb20" 00:12:02.210 ], 00:12:02.210 "product_name": "Malloc disk", 00:12:02.210 "block_size": 512, 00:12:02.210 "num_blocks": 65536, 00:12:02.210 "uuid": "969146d9-a250-4e82-9826-4fbafdabbb20", 00:12:02.210 "assigned_rate_limits": { 00:12:02.210 "rw_ios_per_sec": 0, 00:12:02.210 "rw_mbytes_per_sec": 0, 00:12:02.210 "r_mbytes_per_sec": 0, 00:12:02.210 "w_mbytes_per_sec": 0 00:12:02.210 }, 00:12:02.210 "claimed": true, 00:12:02.210 "claim_type": "exclusive_write", 00:12:02.210 "zoned": false, 00:12:02.210 "supported_io_types": { 00:12:02.210 "read": true, 00:12:02.210 "write": true, 00:12:02.210 "unmap": true, 00:12:02.210 "flush": true, 00:12:02.210 "reset": true, 00:12:02.210 "nvme_admin": false, 00:12:02.210 "nvme_io": false, 00:12:02.210 "nvme_io_md": false, 00:12:02.210 "write_zeroes": true, 00:12:02.210 "zcopy": true, 00:12:02.210 "get_zone_info": false, 00:12:02.210 "zone_management": false, 00:12:02.210 "zone_append": false, 00:12:02.210 "compare": false, 00:12:02.210 "compare_and_write": false, 
00:12:02.210 "abort": true, 00:12:02.210 "seek_hole": false, 00:12:02.210 "seek_data": false, 00:12:02.210 "copy": true, 00:12:02.210 "nvme_iov_md": false 00:12:02.210 }, 00:12:02.210 "memory_domains": [ 00:12:02.210 { 00:12:02.210 "dma_device_id": "system", 00:12:02.210 "dma_device_type": 1 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.210 "dma_device_type": 2 00:12:02.210 } 00:12:02.210 ], 00:12:02.210 "driver_specific": {} 00:12:02.210 } 00:12:02.210 ] 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.210 "name": "Existed_Raid", 00:12:02.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.210 "strip_size_kb": 64, 00:12:02.210 "state": "configuring", 00:12:02.210 "raid_level": "concat", 00:12:02.210 "superblock": false, 00:12:02.210 "num_base_bdevs": 4, 00:12:02.210 "num_base_bdevs_discovered": 3, 00:12:02.210 "num_base_bdevs_operational": 4, 00:12:02.210 "base_bdevs_list": [ 00:12:02.210 { 00:12:02.210 "name": "BaseBdev1", 00:12:02.210 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:02.210 "is_configured": true, 00:12:02.210 "data_offset": 0, 00:12:02.210 "data_size": 65536 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "name": "BaseBdev2", 00:12:02.210 "uuid": "0436a0e6-011c-42b6-8fa8-304dc44dbbbf", 00:12:02.210 "is_configured": true, 00:12:02.210 "data_offset": 0, 00:12:02.210 "data_size": 65536 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "name": "BaseBdev3", 00:12:02.210 "uuid": "969146d9-a250-4e82-9826-4fbafdabbb20", 00:12:02.210 "is_configured": true, 00:12:02.210 "data_offset": 0, 00:12:02.210 "data_size": 65536 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "name": "BaseBdev4", 00:12:02.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.210 "is_configured": false, 
00:12:02.210 "data_offset": 0, 00:12:02.210 "data_size": 0 00:12:02.210 } 00:12:02.210 ] 00:12:02.210 }' 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.210 03:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.778 [2024-11-05 03:23:16.322090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:02.778 [2024-11-05 03:23:16.322164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:02.778 [2024-11-05 03:23:16.322176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:02.778 [2024-11-05 03:23:16.322577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:02.778 [2024-11-05 03:23:16.322824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:02.778 [2024-11-05 03:23:16.322855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:02.778 [2024-11-05 03:23:16.323169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.778 BaseBdev4 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.778 [ 00:12:02.778 { 00:12:02.778 "name": "BaseBdev4", 00:12:02.778 "aliases": [ 00:12:02.778 "b5c09be0-085e-4b11-888e-f78e8d632fbc" 00:12:02.778 ], 00:12:02.778 "product_name": "Malloc disk", 00:12:02.778 "block_size": 512, 00:12:02.778 "num_blocks": 65536, 00:12:02.778 "uuid": "b5c09be0-085e-4b11-888e-f78e8d632fbc", 00:12:02.778 "assigned_rate_limits": { 00:12:02.778 "rw_ios_per_sec": 0, 00:12:02.778 "rw_mbytes_per_sec": 0, 00:12:02.778 "r_mbytes_per_sec": 0, 00:12:02.778 "w_mbytes_per_sec": 0 00:12:02.778 }, 00:12:02.778 "claimed": true, 00:12:02.778 "claim_type": "exclusive_write", 00:12:02.778 "zoned": false, 00:12:02.778 "supported_io_types": { 00:12:02.778 "read": true, 00:12:02.778 "write": true, 00:12:02.778 "unmap": true, 00:12:02.778 "flush": true, 00:12:02.778 "reset": true, 00:12:02.778 
"nvme_admin": false, 00:12:02.778 "nvme_io": false, 00:12:02.778 "nvme_io_md": false, 00:12:02.778 "write_zeroes": true, 00:12:02.778 "zcopy": true, 00:12:02.778 "get_zone_info": false, 00:12:02.778 "zone_management": false, 00:12:02.778 "zone_append": false, 00:12:02.778 "compare": false, 00:12:02.778 "compare_and_write": false, 00:12:02.778 "abort": true, 00:12:02.778 "seek_hole": false, 00:12:02.778 "seek_data": false, 00:12:02.778 "copy": true, 00:12:02.778 "nvme_iov_md": false 00:12:02.778 }, 00:12:02.778 "memory_domains": [ 00:12:02.778 { 00:12:02.778 "dma_device_id": "system", 00:12:02.778 "dma_device_type": 1 00:12:02.778 }, 00:12:02.778 { 00:12:02.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.778 "dma_device_type": 2 00:12:02.778 } 00:12:02.778 ], 00:12:02.778 "driver_specific": {} 00:12:02.778 } 00:12:02.778 ] 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.778 
03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.778 "name": "Existed_Raid", 00:12:02.778 "uuid": "bf1a87c1-add7-44a9-8028-0354123066bb", 00:12:02.778 "strip_size_kb": 64, 00:12:02.778 "state": "online", 00:12:02.778 "raid_level": "concat", 00:12:02.778 "superblock": false, 00:12:02.778 "num_base_bdevs": 4, 00:12:02.778 "num_base_bdevs_discovered": 4, 00:12:02.778 "num_base_bdevs_operational": 4, 00:12:02.778 "base_bdevs_list": [ 00:12:02.778 { 00:12:02.778 "name": "BaseBdev1", 00:12:02.778 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:02.778 "is_configured": true, 00:12:02.778 "data_offset": 0, 00:12:02.778 "data_size": 65536 00:12:02.778 }, 00:12:02.778 { 00:12:02.778 "name": "BaseBdev2", 00:12:02.778 "uuid": "0436a0e6-011c-42b6-8fa8-304dc44dbbbf", 00:12:02.778 "is_configured": true, 00:12:02.778 "data_offset": 0, 00:12:02.778 "data_size": 65536 00:12:02.778 }, 00:12:02.778 { 00:12:02.778 "name": "BaseBdev3", 
00:12:02.778 "uuid": "969146d9-a250-4e82-9826-4fbafdabbb20", 00:12:02.778 "is_configured": true, 00:12:02.778 "data_offset": 0, 00:12:02.778 "data_size": 65536 00:12:02.778 }, 00:12:02.778 { 00:12:02.778 "name": "BaseBdev4", 00:12:02.778 "uuid": "b5c09be0-085e-4b11-888e-f78e8d632fbc", 00:12:02.778 "is_configured": true, 00:12:02.778 "data_offset": 0, 00:12:02.778 "data_size": 65536 00:12:02.778 } 00:12:02.778 ] 00:12:02.778 }' 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.778 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.346 [2024-11-05 03:23:16.878794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.346 03:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.346 
03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.346 "name": "Existed_Raid", 00:12:03.346 "aliases": [ 00:12:03.346 "bf1a87c1-add7-44a9-8028-0354123066bb" 00:12:03.346 ], 00:12:03.346 "product_name": "Raid Volume", 00:12:03.346 "block_size": 512, 00:12:03.346 "num_blocks": 262144, 00:12:03.346 "uuid": "bf1a87c1-add7-44a9-8028-0354123066bb", 00:12:03.346 "assigned_rate_limits": { 00:12:03.346 "rw_ios_per_sec": 0, 00:12:03.346 "rw_mbytes_per_sec": 0, 00:12:03.346 "r_mbytes_per_sec": 0, 00:12:03.346 "w_mbytes_per_sec": 0 00:12:03.346 }, 00:12:03.346 "claimed": false, 00:12:03.346 "zoned": false, 00:12:03.346 "supported_io_types": { 00:12:03.346 "read": true, 00:12:03.346 "write": true, 00:12:03.346 "unmap": true, 00:12:03.346 "flush": true, 00:12:03.346 "reset": true, 00:12:03.346 "nvme_admin": false, 00:12:03.346 "nvme_io": false, 00:12:03.346 "nvme_io_md": false, 00:12:03.346 "write_zeroes": true, 00:12:03.346 "zcopy": false, 00:12:03.346 "get_zone_info": false, 00:12:03.346 "zone_management": false, 00:12:03.346 "zone_append": false, 00:12:03.346 "compare": false, 00:12:03.346 "compare_and_write": false, 00:12:03.346 "abort": false, 00:12:03.346 "seek_hole": false, 00:12:03.346 "seek_data": false, 00:12:03.346 "copy": false, 00:12:03.346 "nvme_iov_md": false 00:12:03.346 }, 00:12:03.346 "memory_domains": [ 00:12:03.346 { 00:12:03.346 "dma_device_id": "system", 00:12:03.347 "dma_device_type": 1 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.347 "dma_device_type": 2 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": "system", 00:12:03.347 "dma_device_type": 1 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.347 "dma_device_type": 2 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": "system", 00:12:03.347 "dma_device_type": 1 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:03.347 "dma_device_type": 2 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": "system", 00:12:03.347 "dma_device_type": 1 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.347 "dma_device_type": 2 00:12:03.347 } 00:12:03.347 ], 00:12:03.347 "driver_specific": { 00:12:03.347 "raid": { 00:12:03.347 "uuid": "bf1a87c1-add7-44a9-8028-0354123066bb", 00:12:03.347 "strip_size_kb": 64, 00:12:03.347 "state": "online", 00:12:03.347 "raid_level": "concat", 00:12:03.347 "superblock": false, 00:12:03.347 "num_base_bdevs": 4, 00:12:03.347 "num_base_bdevs_discovered": 4, 00:12:03.347 "num_base_bdevs_operational": 4, 00:12:03.347 "base_bdevs_list": [ 00:12:03.347 { 00:12:03.347 "name": "BaseBdev1", 00:12:03.347 "uuid": "45ac1418-6bbc-40fb-83d9-3106099b4215", 00:12:03.347 "is_configured": true, 00:12:03.347 "data_offset": 0, 00:12:03.347 "data_size": 65536 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "name": "BaseBdev2", 00:12:03.347 "uuid": "0436a0e6-011c-42b6-8fa8-304dc44dbbbf", 00:12:03.347 "is_configured": true, 00:12:03.347 "data_offset": 0, 00:12:03.347 "data_size": 65536 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "name": "BaseBdev3", 00:12:03.347 "uuid": "969146d9-a250-4e82-9826-4fbafdabbb20", 00:12:03.347 "is_configured": true, 00:12:03.347 "data_offset": 0, 00:12:03.347 "data_size": 65536 00:12:03.347 }, 00:12:03.347 { 00:12:03.347 "name": "BaseBdev4", 00:12:03.347 "uuid": "b5c09be0-085e-4b11-888e-f78e8d632fbc", 00:12:03.347 "is_configured": true, 00:12:03.347 "data_offset": 0, 00:12:03.347 "data_size": 65536 00:12:03.347 } 00:12:03.347 ] 00:12:03.347 } 00:12:03.347 } 00:12:03.347 }' 00:12:03.347 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.347 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:03.347 BaseBdev2 
00:12:03.347 BaseBdev3 00:12:03.347 BaseBdev4' 00:12:03.347 03:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.606 03:23:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.606 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.607 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.607 03:23:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.607 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.607 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.607 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.607 [2024-11-05 03:23:17.230525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.607 [2024-11-05 03:23:17.230569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.607 [2024-11-05 03:23:17.230636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.866 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.867 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.867 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.867 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.867 "name": "Existed_Raid", 00:12:03.867 "uuid": "bf1a87c1-add7-44a9-8028-0354123066bb", 00:12:03.867 "strip_size_kb": 64, 00:12:03.867 "state": "offline", 00:12:03.867 "raid_level": "concat", 00:12:03.867 "superblock": false, 00:12:03.867 "num_base_bdevs": 4, 00:12:03.867 "num_base_bdevs_discovered": 3, 00:12:03.867 "num_base_bdevs_operational": 3, 00:12:03.867 "base_bdevs_list": [ 00:12:03.867 { 00:12:03.867 "name": null, 00:12:03.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.867 "is_configured": false, 00:12:03.867 "data_offset": 0, 00:12:03.867 "data_size": 65536 00:12:03.867 }, 00:12:03.867 { 00:12:03.867 "name": "BaseBdev2", 00:12:03.867 "uuid": "0436a0e6-011c-42b6-8fa8-304dc44dbbbf", 00:12:03.867 "is_configured": 
true, 00:12:03.867 "data_offset": 0, 00:12:03.867 "data_size": 65536 00:12:03.867 }, 00:12:03.867 { 00:12:03.867 "name": "BaseBdev3", 00:12:03.867 "uuid": "969146d9-a250-4e82-9826-4fbafdabbb20", 00:12:03.867 "is_configured": true, 00:12:03.867 "data_offset": 0, 00:12:03.867 "data_size": 65536 00:12:03.867 }, 00:12:03.867 { 00:12:03.867 "name": "BaseBdev4", 00:12:03.867 "uuid": "b5c09be0-085e-4b11-888e-f78e8d632fbc", 00:12:03.867 "is_configured": true, 00:12:03.867 "data_offset": 0, 00:12:03.867 "data_size": 65536 00:12:03.867 } 00:12:03.867 ] 00:12:03.867 }' 00:12:03.867 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.867 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.434 [2024-11-05 03:23:17.915090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.434 03:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.434 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.434 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.434 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.435 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:04.435 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.435 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.435 [2024-11-05 03:23:18.047100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.694 03:23:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 [2024-11-05 03:23:18.176911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:04.694 [2024-11-05 03:23:18.176985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.694 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.953 BaseBdev2 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.953 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 [ 00:12:04.954 { 00:12:04.954 "name": "BaseBdev2", 00:12:04.954 "aliases": [ 00:12:04.954 "eef51241-b356-4b46-a34a-4385c4a11c09" 00:12:04.954 ], 00:12:04.954 "product_name": "Malloc disk", 00:12:04.954 "block_size": 512, 00:12:04.954 "num_blocks": 65536, 00:12:04.954 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:04.954 "assigned_rate_limits": { 00:12:04.954 "rw_ios_per_sec": 0, 00:12:04.954 "rw_mbytes_per_sec": 0, 00:12:04.954 "r_mbytes_per_sec": 0, 00:12:04.954 "w_mbytes_per_sec": 0 00:12:04.954 }, 00:12:04.954 "claimed": false, 00:12:04.954 "zoned": false, 00:12:04.954 "supported_io_types": { 00:12:04.954 "read": true, 00:12:04.954 "write": true, 00:12:04.954 "unmap": true, 00:12:04.954 "flush": true, 00:12:04.954 "reset": true, 00:12:04.954 "nvme_admin": false, 00:12:04.954 "nvme_io": false, 00:12:04.954 "nvme_io_md": false, 00:12:04.954 "write_zeroes": true, 00:12:04.954 "zcopy": true, 00:12:04.954 "get_zone_info": false, 00:12:04.954 "zone_management": false, 00:12:04.954 "zone_append": false, 00:12:04.954 "compare": false, 00:12:04.954 "compare_and_write": false, 00:12:04.954 "abort": true, 00:12:04.954 "seek_hole": false, 00:12:04.954 
"seek_data": false, 00:12:04.954 "copy": true, 00:12:04.954 "nvme_iov_md": false 00:12:04.954 }, 00:12:04.954 "memory_domains": [ 00:12:04.954 { 00:12:04.954 "dma_device_id": "system", 00:12:04.954 "dma_device_type": 1 00:12:04.954 }, 00:12:04.954 { 00:12:04.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.954 "dma_device_type": 2 00:12:04.954 } 00:12:04.954 ], 00:12:04.954 "driver_specific": {} 00:12:04.954 } 00:12:04.954 ] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 BaseBdev3 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 [ 00:12:04.954 { 00:12:04.954 "name": "BaseBdev3", 00:12:04.954 "aliases": [ 00:12:04.954 "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7" 00:12:04.954 ], 00:12:04.954 "product_name": "Malloc disk", 00:12:04.954 "block_size": 512, 00:12:04.954 "num_blocks": 65536, 00:12:04.954 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:04.954 "assigned_rate_limits": { 00:12:04.954 "rw_ios_per_sec": 0, 00:12:04.954 "rw_mbytes_per_sec": 0, 00:12:04.954 "r_mbytes_per_sec": 0, 00:12:04.954 "w_mbytes_per_sec": 0 00:12:04.954 }, 00:12:04.954 "claimed": false, 00:12:04.954 "zoned": false, 00:12:04.954 "supported_io_types": { 00:12:04.954 "read": true, 00:12:04.954 "write": true, 00:12:04.954 "unmap": true, 00:12:04.954 "flush": true, 00:12:04.954 "reset": true, 00:12:04.954 "nvme_admin": false, 00:12:04.954 "nvme_io": false, 00:12:04.954 "nvme_io_md": false, 00:12:04.954 "write_zeroes": true, 00:12:04.954 "zcopy": true, 00:12:04.954 "get_zone_info": false, 00:12:04.954 "zone_management": false, 00:12:04.954 "zone_append": false, 00:12:04.954 "compare": false, 00:12:04.954 "compare_and_write": false, 00:12:04.954 "abort": true, 00:12:04.954 "seek_hole": false, 00:12:04.954 "seek_data": false, 
00:12:04.954 "copy": true, 00:12:04.954 "nvme_iov_md": false 00:12:04.954 }, 00:12:04.954 "memory_domains": [ 00:12:04.954 { 00:12:04.954 "dma_device_id": "system", 00:12:04.954 "dma_device_type": 1 00:12:04.954 }, 00:12:04.954 { 00:12:04.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.954 "dma_device_type": 2 00:12:04.954 } 00:12:04.954 ], 00:12:04.954 "driver_specific": {} 00:12:04.954 } 00:12:04.954 ] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 BaseBdev4 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.954 
03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.954 [ 00:12:04.954 { 00:12:04.954 "name": "BaseBdev4", 00:12:04.954 "aliases": [ 00:12:04.954 "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6" 00:12:04.954 ], 00:12:04.954 "product_name": "Malloc disk", 00:12:04.954 "block_size": 512, 00:12:04.954 "num_blocks": 65536, 00:12:04.954 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:04.954 "assigned_rate_limits": { 00:12:04.954 "rw_ios_per_sec": 0, 00:12:04.954 "rw_mbytes_per_sec": 0, 00:12:04.954 "r_mbytes_per_sec": 0, 00:12:04.954 "w_mbytes_per_sec": 0 00:12:04.954 }, 00:12:04.954 "claimed": false, 00:12:04.954 "zoned": false, 00:12:04.954 "supported_io_types": { 00:12:04.954 "read": true, 00:12:04.954 "write": true, 00:12:04.954 "unmap": true, 00:12:04.954 "flush": true, 00:12:04.954 "reset": true, 00:12:04.954 "nvme_admin": false, 00:12:04.954 "nvme_io": false, 00:12:04.954 "nvme_io_md": false, 00:12:04.954 "write_zeroes": true, 00:12:04.954 "zcopy": true, 00:12:04.954 "get_zone_info": false, 00:12:04.954 "zone_management": false, 00:12:04.954 "zone_append": false, 00:12:04.954 "compare": false, 00:12:04.954 "compare_and_write": false, 00:12:04.954 "abort": true, 00:12:04.954 "seek_hole": false, 00:12:04.954 "seek_data": false, 00:12:04.954 
"copy": true, 00:12:04.954 "nvme_iov_md": false 00:12:04.954 }, 00:12:04.954 "memory_domains": [ 00:12:04.954 { 00:12:04.954 "dma_device_id": "system", 00:12:04.954 "dma_device_type": 1 00:12:04.954 }, 00:12:04.954 { 00:12:04.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.954 "dma_device_type": 2 00:12:04.954 } 00:12:04.954 ], 00:12:04.954 "driver_specific": {} 00:12:04.954 } 00:12:04.954 ] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.954 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.955 [2024-11-05 03:23:18.527175] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.955 [2024-11-05 03:23:18.527241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.955 [2024-11-05 03:23:18.527271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.955 [2024-11-05 03:23:18.529779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.955 [2024-11-05 03:23:18.529858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.955 03:23:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.955 "name": "Existed_Raid", 00:12:04.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.955 "strip_size_kb": 64, 00:12:04.955 "state": "configuring", 00:12:04.955 
"raid_level": "concat", 00:12:04.955 "superblock": false, 00:12:04.955 "num_base_bdevs": 4, 00:12:04.955 "num_base_bdevs_discovered": 3, 00:12:04.955 "num_base_bdevs_operational": 4, 00:12:04.955 "base_bdevs_list": [ 00:12:04.955 { 00:12:04.955 "name": "BaseBdev1", 00:12:04.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.955 "is_configured": false, 00:12:04.955 "data_offset": 0, 00:12:04.955 "data_size": 0 00:12:04.955 }, 00:12:04.955 { 00:12:04.955 "name": "BaseBdev2", 00:12:04.955 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:04.955 "is_configured": true, 00:12:04.955 "data_offset": 0, 00:12:04.955 "data_size": 65536 00:12:04.955 }, 00:12:04.955 { 00:12:04.955 "name": "BaseBdev3", 00:12:04.955 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:04.955 "is_configured": true, 00:12:04.955 "data_offset": 0, 00:12:04.955 "data_size": 65536 00:12:04.955 }, 00:12:04.955 { 00:12:04.955 "name": "BaseBdev4", 00:12:04.955 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:04.955 "is_configured": true, 00:12:04.955 "data_offset": 0, 00:12:04.955 "data_size": 65536 00:12:04.955 } 00:12:04.955 ] 00:12:04.955 }' 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.955 03:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.524 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:05.524 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.524 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.524 [2024-11-05 03:23:19.047397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.524 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.524 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.525 "name": "Existed_Raid", 00:12:05.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.525 "strip_size_kb": 64, 00:12:05.525 "state": "configuring", 00:12:05.525 "raid_level": "concat", 00:12:05.525 "superblock": false, 
00:12:05.525 "num_base_bdevs": 4, 00:12:05.525 "num_base_bdevs_discovered": 2, 00:12:05.525 "num_base_bdevs_operational": 4, 00:12:05.525 "base_bdevs_list": [ 00:12:05.525 { 00:12:05.525 "name": "BaseBdev1", 00:12:05.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.525 "is_configured": false, 00:12:05.525 "data_offset": 0, 00:12:05.525 "data_size": 0 00:12:05.525 }, 00:12:05.525 { 00:12:05.525 "name": null, 00:12:05.525 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:05.525 "is_configured": false, 00:12:05.525 "data_offset": 0, 00:12:05.525 "data_size": 65536 00:12:05.525 }, 00:12:05.525 { 00:12:05.525 "name": "BaseBdev3", 00:12:05.525 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:05.525 "is_configured": true, 00:12:05.525 "data_offset": 0, 00:12:05.525 "data_size": 65536 00:12:05.525 }, 00:12:05.525 { 00:12:05.525 "name": "BaseBdev4", 00:12:05.525 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:05.525 "is_configured": true, 00:12:05.525 "data_offset": 0, 00:12:05.525 "data_size": 65536 00:12:05.525 } 00:12:05.525 ] 00:12:05.525 }' 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.525 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.096 03:23:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.096 [2024-11-05 03:23:19.644900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.096 BaseBdev1 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:06.096 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.097 [ 00:12:06.097 { 00:12:06.097 "name": "BaseBdev1", 00:12:06.097 "aliases": [ 00:12:06.097 "f305aff1-8b94-42f8-9859-efee7cb33a83" 00:12:06.097 ], 00:12:06.097 "product_name": "Malloc disk", 00:12:06.097 "block_size": 512, 00:12:06.097 "num_blocks": 65536, 00:12:06.097 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:06.097 "assigned_rate_limits": { 00:12:06.097 "rw_ios_per_sec": 0, 00:12:06.097 "rw_mbytes_per_sec": 0, 00:12:06.097 "r_mbytes_per_sec": 0, 00:12:06.097 "w_mbytes_per_sec": 0 00:12:06.097 }, 00:12:06.097 "claimed": true, 00:12:06.097 "claim_type": "exclusive_write", 00:12:06.097 "zoned": false, 00:12:06.097 "supported_io_types": { 00:12:06.097 "read": true, 00:12:06.097 "write": true, 00:12:06.097 "unmap": true, 00:12:06.097 "flush": true, 00:12:06.097 "reset": true, 00:12:06.097 "nvme_admin": false, 00:12:06.097 "nvme_io": false, 00:12:06.097 "nvme_io_md": false, 00:12:06.097 "write_zeroes": true, 00:12:06.097 "zcopy": true, 00:12:06.097 "get_zone_info": false, 00:12:06.097 "zone_management": false, 00:12:06.097 "zone_append": false, 00:12:06.097 "compare": false, 00:12:06.097 "compare_and_write": false, 00:12:06.097 "abort": true, 00:12:06.097 "seek_hole": false, 00:12:06.097 "seek_data": false, 00:12:06.097 "copy": true, 00:12:06.097 "nvme_iov_md": false 00:12:06.097 }, 00:12:06.097 "memory_domains": [ 00:12:06.097 { 00:12:06.097 "dma_device_id": "system", 00:12:06.097 "dma_device_type": 1 00:12:06.097 }, 00:12:06.097 { 00:12:06.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.097 "dma_device_type": 2 00:12:06.097 } 00:12:06.097 ], 00:12:06.097 "driver_specific": {} 00:12:06.097 } 00:12:06.097 ] 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
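Each bdev creation above is followed by a `waitforbdev` call (the `autotest_common.sh@901`–`@909` lines in the trace): default the timeout to 2000 ms, run `bdev_wait_for_examine`, then query `bdev_get_bdevs -b NAME -t TIMEOUT`, and return 0 once the bdev is visible. A hedged sketch of that flow, with `rpc_cmd` stubbed to always succeed so it runs standalone (the real helper fails if the target never reports the bdev):

```shell
# Sketch of the waitforbdev flow from the trace; rpc_cmd is a stub that
# always succeeds, standing in for the real SPDK RPC round-trip.
rpc_cmd() { echo "rpc: $*"; }

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    [[ -z $bdev_timeout ]] && bdev_timeout=2000   # default seen in the log
    rpc_cmd bdev_wait_for_examine || return 1
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
}

waitforbdev BaseBdev1 && wait_ok=yes || wait_ok=no
```

With a live target, the `-t 2000` argument bounds how long `bdev_get_bdevs` waits for the named bdev to appear before the helper gives up.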
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.097 "name": "Existed_Raid", 00:12:06.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.097 "strip_size_kb": 64, 00:12:06.097 "state": "configuring", 00:12:06.097 "raid_level": "concat", 00:12:06.097 "superblock": false, 
00:12:06.097 "num_base_bdevs": 4, 00:12:06.097 "num_base_bdevs_discovered": 3, 00:12:06.097 "num_base_bdevs_operational": 4, 00:12:06.097 "base_bdevs_list": [ 00:12:06.097 { 00:12:06.097 "name": "BaseBdev1", 00:12:06.097 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:06.097 "is_configured": true, 00:12:06.097 "data_offset": 0, 00:12:06.097 "data_size": 65536 00:12:06.097 }, 00:12:06.097 { 00:12:06.097 "name": null, 00:12:06.097 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:06.097 "is_configured": false, 00:12:06.097 "data_offset": 0, 00:12:06.097 "data_size": 65536 00:12:06.097 }, 00:12:06.097 { 00:12:06.097 "name": "BaseBdev3", 00:12:06.097 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:06.097 "is_configured": true, 00:12:06.097 "data_offset": 0, 00:12:06.097 "data_size": 65536 00:12:06.097 }, 00:12:06.097 { 00:12:06.097 "name": "BaseBdev4", 00:12:06.097 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:06.097 "is_configured": true, 00:12:06.097 "data_offset": 0, 00:12:06.097 "data_size": 65536 00:12:06.097 } 00:12:06.097 ] 00:12:06.097 }' 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.097 03:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:06.666 03:23:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.666 [2024-11-05 03:23:20.265158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.666 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.667 03:23:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.667 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.926 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.926 "name": "Existed_Raid", 00:12:06.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.926 "strip_size_kb": 64, 00:12:06.926 "state": "configuring", 00:12:06.926 "raid_level": "concat", 00:12:06.926 "superblock": false, 00:12:06.926 "num_base_bdevs": 4, 00:12:06.926 "num_base_bdevs_discovered": 2, 00:12:06.926 "num_base_bdevs_operational": 4, 00:12:06.926 "base_bdevs_list": [ 00:12:06.926 { 00:12:06.926 "name": "BaseBdev1", 00:12:06.926 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:06.926 "is_configured": true, 00:12:06.926 "data_offset": 0, 00:12:06.926 "data_size": 65536 00:12:06.926 }, 00:12:06.926 { 00:12:06.926 "name": null, 00:12:06.926 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:06.926 "is_configured": false, 00:12:06.926 "data_offset": 0, 00:12:06.926 "data_size": 65536 00:12:06.926 }, 00:12:06.926 { 00:12:06.926 "name": null, 00:12:06.926 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:06.926 "is_configured": false, 00:12:06.926 "data_offset": 0, 00:12:06.926 "data_size": 65536 00:12:06.926 }, 00:12:06.926 { 00:12:06.926 "name": "BaseBdev4", 00:12:06.926 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:06.926 "is_configured": true, 00:12:06.926 "data_offset": 0, 00:12:06.926 "data_size": 65536 00:12:06.926 } 00:12:06.926 ] 00:12:06.926 }' 00:12:06.926 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.926 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.185 03:23:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.185 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.185 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.185 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.185 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.444 [2024-11-05 03:23:20.837295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.444 "name": "Existed_Raid", 00:12:07.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.444 "strip_size_kb": 64, 00:12:07.444 "state": "configuring", 00:12:07.444 "raid_level": "concat", 00:12:07.444 "superblock": false, 00:12:07.444 "num_base_bdevs": 4, 00:12:07.444 "num_base_bdevs_discovered": 3, 00:12:07.444 "num_base_bdevs_operational": 4, 00:12:07.444 "base_bdevs_list": [ 00:12:07.444 { 00:12:07.444 "name": "BaseBdev1", 00:12:07.444 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:07.444 "is_configured": true, 00:12:07.444 "data_offset": 0, 00:12:07.444 "data_size": 65536 00:12:07.444 }, 00:12:07.444 { 00:12:07.444 "name": null, 00:12:07.444 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:07.444 "is_configured": false, 00:12:07.444 "data_offset": 0, 00:12:07.444 "data_size": 65536 00:12:07.444 }, 00:12:07.444 { 00:12:07.444 "name": "BaseBdev3", 00:12:07.444 "uuid": 
"8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:07.444 "is_configured": true, 00:12:07.444 "data_offset": 0, 00:12:07.444 "data_size": 65536 00:12:07.444 }, 00:12:07.444 { 00:12:07.444 "name": "BaseBdev4", 00:12:07.444 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:07.444 "is_configured": true, 00:12:07.444 "data_offset": 0, 00:12:07.444 "data_size": 65536 00:12:07.444 } 00:12:07.444 ] 00:12:07.444 }' 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.444 03:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.012 [2024-11-05 03:23:21.393527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.012 "name": "Existed_Raid", 00:12:08.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.012 "strip_size_kb": 64, 00:12:08.012 "state": "configuring", 00:12:08.012 "raid_level": "concat", 00:12:08.012 "superblock": false, 00:12:08.012 "num_base_bdevs": 4, 00:12:08.012 
"num_base_bdevs_discovered": 2, 00:12:08.012 "num_base_bdevs_operational": 4, 00:12:08.012 "base_bdevs_list": [ 00:12:08.012 { 00:12:08.012 "name": null, 00:12:08.012 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:08.012 "is_configured": false, 00:12:08.012 "data_offset": 0, 00:12:08.012 "data_size": 65536 00:12:08.012 }, 00:12:08.012 { 00:12:08.012 "name": null, 00:12:08.012 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:08.012 "is_configured": false, 00:12:08.012 "data_offset": 0, 00:12:08.012 "data_size": 65536 00:12:08.012 }, 00:12:08.012 { 00:12:08.012 "name": "BaseBdev3", 00:12:08.012 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:08.012 "is_configured": true, 00:12:08.012 "data_offset": 0, 00:12:08.012 "data_size": 65536 00:12:08.012 }, 00:12:08.012 { 00:12:08.012 "name": "BaseBdev4", 00:12:08.012 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:08.012 "is_configured": true, 00:12:08.012 "data_offset": 0, 00:12:08.012 "data_size": 65536 00:12:08.012 } 00:12:08.012 ] 00:12:08.012 }' 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.012 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.579 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.579 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.579 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.579 03:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.579 03:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.579 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.579 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.579 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.579 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.579 [2024-11-05 03:23:22.040237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.579 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.579 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.580 "name": "Existed_Raid", 00:12:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.580 "strip_size_kb": 64, 00:12:08.580 "state": "configuring", 00:12:08.580 "raid_level": "concat", 00:12:08.580 "superblock": false, 00:12:08.580 "num_base_bdevs": 4, 00:12:08.580 "num_base_bdevs_discovered": 3, 00:12:08.580 "num_base_bdevs_operational": 4, 00:12:08.580 "base_bdevs_list": [ 00:12:08.580 { 00:12:08.580 "name": null, 00:12:08.580 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:08.580 "is_configured": false, 00:12:08.580 "data_offset": 0, 00:12:08.580 "data_size": 65536 00:12:08.580 }, 00:12:08.580 { 00:12:08.580 "name": "BaseBdev2", 00:12:08.580 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:08.580 "is_configured": true, 00:12:08.580 "data_offset": 0, 00:12:08.580 "data_size": 65536 00:12:08.580 }, 00:12:08.580 { 00:12:08.580 "name": "BaseBdev3", 00:12:08.580 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:08.580 "is_configured": true, 00:12:08.580 "data_offset": 0, 00:12:08.580 "data_size": 65536 00:12:08.580 }, 00:12:08.580 { 00:12:08.580 "name": "BaseBdev4", 00:12:08.580 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:08.580 "is_configured": true, 00:12:08.580 "data_offset": 0, 00:12:08.580 "data_size": 65536 00:12:08.580 } 00:12:08.580 ] 00:12:08.580 }' 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.580 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f305aff1-8b94-42f8-9859-efee7cb33a83 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 [2024-11-05 03:23:22.700169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.188 [2024-11-05 03:23:22.700247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.188 [2024-11-05 03:23:22.700258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:09.188 [2024-11-05 03:23:22.700633] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:09.188 [2024-11-05 03:23:22.700832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.188 [2024-11-05 03:23:22.700863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:09.188 [2024-11-05 03:23:22.701156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.188 NewBaseBdev 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.188 03:23:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 [ 00:12:09.188 { 00:12:09.188 "name": "NewBaseBdev", 00:12:09.188 "aliases": [ 00:12:09.188 "f305aff1-8b94-42f8-9859-efee7cb33a83" 00:12:09.188 ], 00:12:09.188 "product_name": "Malloc disk", 00:12:09.188 "block_size": 512, 00:12:09.188 "num_blocks": 65536, 00:12:09.188 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:09.188 "assigned_rate_limits": { 00:12:09.188 "rw_ios_per_sec": 0, 00:12:09.188 "rw_mbytes_per_sec": 0, 00:12:09.188 "r_mbytes_per_sec": 0, 00:12:09.188 "w_mbytes_per_sec": 0 00:12:09.188 }, 00:12:09.188 "claimed": true, 00:12:09.188 "claim_type": "exclusive_write", 00:12:09.188 "zoned": false, 00:12:09.188 "supported_io_types": { 00:12:09.188 "read": true, 00:12:09.188 "write": true, 00:12:09.188 "unmap": true, 00:12:09.188 "flush": true, 00:12:09.188 "reset": true, 00:12:09.188 "nvme_admin": false, 00:12:09.188 "nvme_io": false, 00:12:09.188 "nvme_io_md": false, 00:12:09.188 "write_zeroes": true, 00:12:09.188 "zcopy": true, 00:12:09.188 "get_zone_info": false, 00:12:09.188 "zone_management": false, 00:12:09.188 "zone_append": false, 00:12:09.188 "compare": false, 00:12:09.188 "compare_and_write": false, 00:12:09.188 "abort": true, 00:12:09.188 "seek_hole": false, 00:12:09.188 "seek_data": false, 00:12:09.188 "copy": true, 00:12:09.188 "nvme_iov_md": false 00:12:09.188 }, 00:12:09.188 "memory_domains": [ 00:12:09.188 { 00:12:09.188 "dma_device_id": "system", 00:12:09.188 "dma_device_type": 1 00:12:09.188 }, 00:12:09.188 { 00:12:09.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.188 "dma_device_type": 2 00:12:09.188 } 00:12:09.188 ], 00:12:09.188 "driver_specific": {} 00:12:09.188 } 00:12:09.188 ] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:09.188 03:23:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.188 "name": "Existed_Raid", 00:12:09.188 "uuid": "994855dc-8aef-41d3-8865-a9b10cc448fe", 00:12:09.188 "strip_size_kb": 64, 00:12:09.188 "state": "online", 00:12:09.188 "raid_level": 
"concat", 00:12:09.188 "superblock": false, 00:12:09.188 "num_base_bdevs": 4, 00:12:09.188 "num_base_bdevs_discovered": 4, 00:12:09.188 "num_base_bdevs_operational": 4, 00:12:09.188 "base_bdevs_list": [ 00:12:09.188 { 00:12:09.188 "name": "NewBaseBdev", 00:12:09.188 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:09.188 "is_configured": true, 00:12:09.188 "data_offset": 0, 00:12:09.188 "data_size": 65536 00:12:09.188 }, 00:12:09.188 { 00:12:09.188 "name": "BaseBdev2", 00:12:09.188 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:09.188 "is_configured": true, 00:12:09.188 "data_offset": 0, 00:12:09.188 "data_size": 65536 00:12:09.188 }, 00:12:09.188 { 00:12:09.188 "name": "BaseBdev3", 00:12:09.188 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:09.188 "is_configured": true, 00:12:09.188 "data_offset": 0, 00:12:09.188 "data_size": 65536 00:12:09.188 }, 00:12:09.188 { 00:12:09.188 "name": "BaseBdev4", 00:12:09.188 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:09.188 "is_configured": true, 00:12:09.188 "data_offset": 0, 00:12:09.188 "data_size": 65536 00:12:09.188 } 00:12:09.188 ] 00:12:09.188 }' 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.188 03:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.754 [2024-11-05 03:23:23.248905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.754 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.754 "name": "Existed_Raid", 00:12:09.754 "aliases": [ 00:12:09.754 "994855dc-8aef-41d3-8865-a9b10cc448fe" 00:12:09.754 ], 00:12:09.754 "product_name": "Raid Volume", 00:12:09.754 "block_size": 512, 00:12:09.754 "num_blocks": 262144, 00:12:09.754 "uuid": "994855dc-8aef-41d3-8865-a9b10cc448fe", 00:12:09.754 "assigned_rate_limits": { 00:12:09.754 "rw_ios_per_sec": 0, 00:12:09.754 "rw_mbytes_per_sec": 0, 00:12:09.754 "r_mbytes_per_sec": 0, 00:12:09.754 "w_mbytes_per_sec": 0 00:12:09.754 }, 00:12:09.754 "claimed": false, 00:12:09.755 "zoned": false, 00:12:09.755 "supported_io_types": { 00:12:09.755 "read": true, 00:12:09.755 "write": true, 00:12:09.755 "unmap": true, 00:12:09.755 "flush": true, 00:12:09.755 "reset": true, 00:12:09.755 "nvme_admin": false, 00:12:09.755 "nvme_io": false, 00:12:09.755 "nvme_io_md": false, 00:12:09.755 "write_zeroes": true, 00:12:09.755 "zcopy": false, 00:12:09.755 "get_zone_info": false, 00:12:09.755 "zone_management": false, 00:12:09.755 "zone_append": false, 00:12:09.755 "compare": false, 00:12:09.755 "compare_and_write": false, 00:12:09.755 "abort": false, 00:12:09.755 "seek_hole": false, 00:12:09.755 "seek_data": false, 00:12:09.755 "copy": false, 
00:12:09.755 "nvme_iov_md": false 00:12:09.755 }, 00:12:09.755 "memory_domains": [ 00:12:09.755 { 00:12:09.755 "dma_device_id": "system", 00:12:09.755 "dma_device_type": 1 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.755 "dma_device_type": 2 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "system", 00:12:09.755 "dma_device_type": 1 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.755 "dma_device_type": 2 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "system", 00:12:09.755 "dma_device_type": 1 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.755 "dma_device_type": 2 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "system", 00:12:09.755 "dma_device_type": 1 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.755 "dma_device_type": 2 00:12:09.755 } 00:12:09.755 ], 00:12:09.755 "driver_specific": { 00:12:09.755 "raid": { 00:12:09.755 "uuid": "994855dc-8aef-41d3-8865-a9b10cc448fe", 00:12:09.755 "strip_size_kb": 64, 00:12:09.755 "state": "online", 00:12:09.755 "raid_level": "concat", 00:12:09.755 "superblock": false, 00:12:09.755 "num_base_bdevs": 4, 00:12:09.755 "num_base_bdevs_discovered": 4, 00:12:09.755 "num_base_bdevs_operational": 4, 00:12:09.755 "base_bdevs_list": [ 00:12:09.755 { 00:12:09.755 "name": "NewBaseBdev", 00:12:09.755 "uuid": "f305aff1-8b94-42f8-9859-efee7cb33a83", 00:12:09.755 "is_configured": true, 00:12:09.755 "data_offset": 0, 00:12:09.755 "data_size": 65536 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "name": "BaseBdev2", 00:12:09.755 "uuid": "eef51241-b356-4b46-a34a-4385c4a11c09", 00:12:09.755 "is_configured": true, 00:12:09.755 "data_offset": 0, 00:12:09.755 "data_size": 65536 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "name": "BaseBdev3", 00:12:09.755 "uuid": "8fdd1712-52a7-4fc1-9e76-2b22c150b1e7", 00:12:09.755 
"is_configured": true, 00:12:09.755 "data_offset": 0, 00:12:09.755 "data_size": 65536 00:12:09.755 }, 00:12:09.755 { 00:12:09.755 "name": "BaseBdev4", 00:12:09.755 "uuid": "0cd0d94d-0318-4b6c-a2ac-cb56f03b96b6", 00:12:09.755 "is_configured": true, 00:12:09.755 "data_offset": 0, 00:12:09.755 "data_size": 65536 00:12:09.755 } 00:12:09.755 ] 00:12:09.755 } 00:12:09.755 } 00:12:09.755 }' 00:12:09.755 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.755 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.755 BaseBdev2 00:12:09.755 BaseBdev3 00:12:09.755 BaseBdev4' 00:12:09.755 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.013 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.013 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.014 03:23:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.014 03:23:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.014 [2024-11-05 03:23:23.600570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.014 [2024-11-05 03:23:23.600611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.014 [2024-11-05 03:23:23.600746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.014 [2024-11-05 03:23:23.600868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.014 [2024-11-05 03:23:23.600885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71149 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71149 ']' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71149 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71149 00:12:10.014 killing process with pid 71149 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71149' 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71149 00:12:10.014 [2024-11-05 03:23:23.638894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.014 03:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71149 00:12:10.580 [2024-11-05 03:23:23.964400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.515 00:12:11.515 real 0m12.472s 00:12:11.515 user 0m20.862s 00:12:11.515 sys 0m1.706s 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.515 ************************************ 00:12:11.515 END TEST raid_state_function_test 00:12:11.515 ************************************ 
00:12:11.515 03:23:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:11.515 03:23:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:11.515 03:23:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.515 03:23:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.515 ************************************ 00:12:11.515 START TEST raid_state_function_test_sb 00:12:11.515 ************************************ 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.515 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.516 
03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=71829 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.516 Process raid pid: 71829 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71829' 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71829 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 71829 ']' 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:11.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:11.516 03:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.516 [2024-11-05 03:23:25.069420] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:11.516 [2024-11-05 03:23:25.069661] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.774 [2024-11-05 03:23:25.257128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.774 [2024-11-05 03:23:25.381202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.033 [2024-11-05 03:23:25.568975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.033 [2024-11-05 03:23:25.569030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.600 [2024-11-05 03:23:26.062100] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.600 [2024-11-05 03:23:26.062197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.600 [2024-11-05 03:23:26.062218] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.600 [2024-11-05 03:23:26.062239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.600 [2024-11-05 03:23:26.062252] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:12.600 [2024-11-05 03:23:26.062269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.600 [2024-11-05 03:23:26.062281] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.600 [2024-11-05 03:23:26.062328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.600 03:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.600 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.600 "name": "Existed_Raid", 00:12:12.600 "uuid": "f0f98b6c-5304-4dfc-8cf6-c46fff8ba331", 00:12:12.600 "strip_size_kb": 64, 00:12:12.600 "state": "configuring", 00:12:12.600 "raid_level": "concat", 00:12:12.600 "superblock": true, 00:12:12.600 "num_base_bdevs": 4, 00:12:12.600 "num_base_bdevs_discovered": 0, 00:12:12.600 "num_base_bdevs_operational": 4, 00:12:12.600 "base_bdevs_list": [ 00:12:12.600 { 00:12:12.600 "name": "BaseBdev1", 00:12:12.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.600 "is_configured": false, 00:12:12.600 "data_offset": 0, 00:12:12.600 "data_size": 0 00:12:12.600 }, 00:12:12.600 { 00:12:12.600 "name": "BaseBdev2", 00:12:12.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.600 "is_configured": false, 00:12:12.600 "data_offset": 0, 00:12:12.600 "data_size": 0 00:12:12.600 }, 00:12:12.600 { 00:12:12.600 "name": "BaseBdev3", 00:12:12.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.600 "is_configured": false, 00:12:12.600 "data_offset": 0, 00:12:12.600 "data_size": 0 00:12:12.600 }, 00:12:12.600 { 00:12:12.600 "name": "BaseBdev4", 00:12:12.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.600 "is_configured": false, 00:12:12.600 "data_offset": 0, 00:12:12.600 "data_size": 0 00:12:12.600 } 00:12:12.601 ] 00:12:12.601 }' 00:12:12.601 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.601 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 03:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.168 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.168 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 [2024-11-05 03:23:26.570238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.168 [2024-11-05 03:23:26.570302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:13.168 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.168 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.168 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.168 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 [2024-11-05 03:23:26.578259] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.168 [2024-11-05 03:23:26.578332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.168 [2024-11-05 03:23:26.578351] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.168 [2024-11-05 03:23:26.578369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.169 [2024-11-05 03:23:26.578379] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.169 [2024-11-05 03:23:26.578393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.169 [2024-11-05 03:23:26.578402] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:13.169 [2024-11-05 03:23:26.578416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.169 [2024-11-05 03:23:26.622907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.169 BaseBdev1 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.169 [ 00:12:13.169 { 00:12:13.169 "name": "BaseBdev1", 00:12:13.169 "aliases": [ 00:12:13.169 "7db6d7a2-46eb-47ec-9406-8061d925d66b" 00:12:13.169 ], 00:12:13.169 "product_name": "Malloc disk", 00:12:13.169 "block_size": 512, 00:12:13.169 "num_blocks": 65536, 00:12:13.169 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:13.169 "assigned_rate_limits": { 00:12:13.169 "rw_ios_per_sec": 0, 00:12:13.169 "rw_mbytes_per_sec": 0, 00:12:13.169 "r_mbytes_per_sec": 0, 00:12:13.169 "w_mbytes_per_sec": 0 00:12:13.169 }, 00:12:13.169 "claimed": true, 00:12:13.169 "claim_type": "exclusive_write", 00:12:13.169 "zoned": false, 00:12:13.169 "supported_io_types": { 00:12:13.169 "read": true, 00:12:13.169 "write": true, 00:12:13.169 "unmap": true, 00:12:13.169 "flush": true, 00:12:13.169 "reset": true, 00:12:13.169 "nvme_admin": false, 00:12:13.169 "nvme_io": false, 00:12:13.169 "nvme_io_md": false, 00:12:13.169 "write_zeroes": true, 00:12:13.169 "zcopy": true, 00:12:13.169 "get_zone_info": false, 00:12:13.169 "zone_management": false, 00:12:13.169 "zone_append": false, 00:12:13.169 "compare": false, 00:12:13.169 "compare_and_write": false, 00:12:13.169 "abort": true, 00:12:13.169 "seek_hole": false, 00:12:13.169 "seek_data": false, 00:12:13.169 "copy": true, 00:12:13.169 "nvme_iov_md": false 00:12:13.169 }, 00:12:13.169 "memory_domains": [ 00:12:13.169 { 00:12:13.169 "dma_device_id": "system", 00:12:13.169 "dma_device_type": 1 00:12:13.169 }, 00:12:13.169 { 00:12:13.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.169 "dma_device_type": 2 00:12:13.169 } 
00:12:13.169 ], 00:12:13.169 "driver_specific": {} 00:12:13.169 } 00:12:13.169 ] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.169 03:23:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.169 "name": "Existed_Raid", 00:12:13.169 "uuid": "b7aa04d7-5c66-4f5a-9cbe-9fe1fff2261f", 00:12:13.169 "strip_size_kb": 64, 00:12:13.169 "state": "configuring", 00:12:13.169 "raid_level": "concat", 00:12:13.169 "superblock": true, 00:12:13.169 "num_base_bdevs": 4, 00:12:13.169 "num_base_bdevs_discovered": 1, 00:12:13.169 "num_base_bdevs_operational": 4, 00:12:13.169 "base_bdevs_list": [ 00:12:13.169 { 00:12:13.169 "name": "BaseBdev1", 00:12:13.169 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:13.169 "is_configured": true, 00:12:13.169 "data_offset": 2048, 00:12:13.169 "data_size": 63488 00:12:13.169 }, 00:12:13.169 { 00:12:13.169 "name": "BaseBdev2", 00:12:13.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.169 "is_configured": false, 00:12:13.169 "data_offset": 0, 00:12:13.169 "data_size": 0 00:12:13.169 }, 00:12:13.169 { 00:12:13.169 "name": "BaseBdev3", 00:12:13.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.169 "is_configured": false, 00:12:13.169 "data_offset": 0, 00:12:13.169 "data_size": 0 00:12:13.169 }, 00:12:13.169 { 00:12:13.169 "name": "BaseBdev4", 00:12:13.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.169 "is_configured": false, 00:12:13.169 "data_offset": 0, 00:12:13.169 "data_size": 0 00:12:13.169 } 00:12:13.169 ] 00:12:13.169 }' 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.169 03:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.736 03:23:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.736 [2024-11-05 03:23:27.147289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.736 [2024-11-05 03:23:27.147394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.736 [2024-11-05 03:23:27.155404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.736 [2024-11-05 03:23:27.158058] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.736 [2024-11-05 03:23:27.158143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.736 [2024-11-05 03:23:27.158159] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.736 [2024-11-05 03:23:27.158176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.736 [2024-11-05 03:23:27.158185] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.736 [2024-11-05 03:23:27.158198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.736 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:13.736 "name": "Existed_Raid", 00:12:13.736 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:13.736 "strip_size_kb": 64, 00:12:13.736 "state": "configuring", 00:12:13.736 "raid_level": "concat", 00:12:13.736 "superblock": true, 00:12:13.736 "num_base_bdevs": 4, 00:12:13.736 "num_base_bdevs_discovered": 1, 00:12:13.736 "num_base_bdevs_operational": 4, 00:12:13.736 "base_bdevs_list": [ 00:12:13.736 { 00:12:13.736 "name": "BaseBdev1", 00:12:13.736 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:13.736 "is_configured": true, 00:12:13.736 "data_offset": 2048, 00:12:13.736 "data_size": 63488 00:12:13.736 }, 00:12:13.736 { 00:12:13.736 "name": "BaseBdev2", 00:12:13.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.736 "is_configured": false, 00:12:13.736 "data_offset": 0, 00:12:13.736 "data_size": 0 00:12:13.736 }, 00:12:13.736 { 00:12:13.736 "name": "BaseBdev3", 00:12:13.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.736 "is_configured": false, 00:12:13.736 "data_offset": 0, 00:12:13.736 "data_size": 0 00:12:13.736 }, 00:12:13.736 { 00:12:13.736 "name": "BaseBdev4", 00:12:13.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.736 "is_configured": false, 00:12:13.736 "data_offset": 0, 00:12:13.736 "data_size": 0 00:12:13.736 } 00:12:13.736 ] 00:12:13.736 }' 00:12:13.737 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.737 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 [2024-11-05 03:23:27.716475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:14.304 BaseBdev2 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.304 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 [ 00:12:14.304 { 00:12:14.304 "name": "BaseBdev2", 00:12:14.304 "aliases": [ 00:12:14.304 "b045ef6a-f967-4183-949e-2dc3b36d9494" 00:12:14.304 ], 00:12:14.304 "product_name": "Malloc disk", 00:12:14.304 "block_size": 512, 00:12:14.304 "num_blocks": 65536, 00:12:14.304 "uuid": "b045ef6a-f967-4183-949e-2dc3b36d9494", 
00:12:14.304 "assigned_rate_limits": { 00:12:14.304 "rw_ios_per_sec": 0, 00:12:14.304 "rw_mbytes_per_sec": 0, 00:12:14.304 "r_mbytes_per_sec": 0, 00:12:14.304 "w_mbytes_per_sec": 0 00:12:14.304 }, 00:12:14.304 "claimed": true, 00:12:14.304 "claim_type": "exclusive_write", 00:12:14.304 "zoned": false, 00:12:14.304 "supported_io_types": { 00:12:14.304 "read": true, 00:12:14.304 "write": true, 00:12:14.304 "unmap": true, 00:12:14.304 "flush": true, 00:12:14.304 "reset": true, 00:12:14.304 "nvme_admin": false, 00:12:14.304 "nvme_io": false, 00:12:14.305 "nvme_io_md": false, 00:12:14.305 "write_zeroes": true, 00:12:14.305 "zcopy": true, 00:12:14.305 "get_zone_info": false, 00:12:14.305 "zone_management": false, 00:12:14.305 "zone_append": false, 00:12:14.305 "compare": false, 00:12:14.305 "compare_and_write": false, 00:12:14.305 "abort": true, 00:12:14.305 "seek_hole": false, 00:12:14.305 "seek_data": false, 00:12:14.305 "copy": true, 00:12:14.305 "nvme_iov_md": false 00:12:14.305 }, 00:12:14.305 "memory_domains": [ 00:12:14.305 { 00:12:14.305 "dma_device_id": "system", 00:12:14.305 "dma_device_type": 1 00:12:14.305 }, 00:12:14.305 { 00:12:14.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.305 "dma_device_type": 2 00:12:14.305 } 00:12:14.305 ], 00:12:14.305 "driver_specific": {} 00:12:14.305 } 00:12:14.305 ] 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.305 "name": "Existed_Raid", 00:12:14.305 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:14.305 "strip_size_kb": 64, 00:12:14.305 "state": "configuring", 00:12:14.305 "raid_level": "concat", 00:12:14.305 "superblock": true, 00:12:14.305 "num_base_bdevs": 4, 00:12:14.305 "num_base_bdevs_discovered": 2, 00:12:14.305 
"num_base_bdevs_operational": 4, 00:12:14.305 "base_bdevs_list": [ 00:12:14.305 { 00:12:14.305 "name": "BaseBdev1", 00:12:14.305 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:14.305 "is_configured": true, 00:12:14.305 "data_offset": 2048, 00:12:14.305 "data_size": 63488 00:12:14.305 }, 00:12:14.305 { 00:12:14.305 "name": "BaseBdev2", 00:12:14.305 "uuid": "b045ef6a-f967-4183-949e-2dc3b36d9494", 00:12:14.305 "is_configured": true, 00:12:14.305 "data_offset": 2048, 00:12:14.305 "data_size": 63488 00:12:14.305 }, 00:12:14.305 { 00:12:14.305 "name": "BaseBdev3", 00:12:14.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.305 "is_configured": false, 00:12:14.305 "data_offset": 0, 00:12:14.305 "data_size": 0 00:12:14.305 }, 00:12:14.305 { 00:12:14.305 "name": "BaseBdev4", 00:12:14.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.305 "is_configured": false, 00:12:14.305 "data_offset": 0, 00:12:14.305 "data_size": 0 00:12:14.305 } 00:12:14.305 ] 00:12:14.305 }' 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.305 03:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.872 [2024-11-05 03:23:28.334788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.872 BaseBdev3 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.872 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.872 [ 00:12:14.872 { 00:12:14.872 "name": "BaseBdev3", 00:12:14.872 "aliases": [ 00:12:14.872 "d23e4c12-2801-4833-9118-7d07dc157fc5" 00:12:14.872 ], 00:12:14.872 "product_name": "Malloc disk", 00:12:14.872 "block_size": 512, 00:12:14.872 "num_blocks": 65536, 00:12:14.872 "uuid": "d23e4c12-2801-4833-9118-7d07dc157fc5", 00:12:14.872 "assigned_rate_limits": { 00:12:14.872 "rw_ios_per_sec": 0, 00:12:14.872 "rw_mbytes_per_sec": 0, 00:12:14.872 "r_mbytes_per_sec": 0, 00:12:14.872 "w_mbytes_per_sec": 0 00:12:14.872 }, 00:12:14.872 "claimed": true, 00:12:14.872 "claim_type": "exclusive_write", 00:12:14.872 "zoned": false, 00:12:14.872 "supported_io_types": { 
00:12:14.872 "read": true, 00:12:14.872 "write": true, 00:12:14.872 "unmap": true, 00:12:14.872 "flush": true, 00:12:14.872 "reset": true, 00:12:14.872 "nvme_admin": false, 00:12:14.872 "nvme_io": false, 00:12:14.872 "nvme_io_md": false, 00:12:14.873 "write_zeroes": true, 00:12:14.873 "zcopy": true, 00:12:14.873 "get_zone_info": false, 00:12:14.873 "zone_management": false, 00:12:14.873 "zone_append": false, 00:12:14.873 "compare": false, 00:12:14.873 "compare_and_write": false, 00:12:14.873 "abort": true, 00:12:14.873 "seek_hole": false, 00:12:14.873 "seek_data": false, 00:12:14.873 "copy": true, 00:12:14.873 "nvme_iov_md": false 00:12:14.873 }, 00:12:14.873 "memory_domains": [ 00:12:14.873 { 00:12:14.873 "dma_device_id": "system", 00:12:14.873 "dma_device_type": 1 00:12:14.873 }, 00:12:14.873 { 00:12:14.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.873 "dma_device_type": 2 00:12:14.873 } 00:12:14.873 ], 00:12:14.873 "driver_specific": {} 00:12:14.873 } 00:12:14.873 ] 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.873 "name": "Existed_Raid", 00:12:14.873 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:14.873 "strip_size_kb": 64, 00:12:14.873 "state": "configuring", 00:12:14.873 "raid_level": "concat", 00:12:14.873 "superblock": true, 00:12:14.873 "num_base_bdevs": 4, 00:12:14.873 "num_base_bdevs_discovered": 3, 00:12:14.873 "num_base_bdevs_operational": 4, 00:12:14.873 "base_bdevs_list": [ 00:12:14.873 { 00:12:14.873 "name": "BaseBdev1", 00:12:14.873 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:14.873 "is_configured": true, 00:12:14.873 "data_offset": 2048, 00:12:14.873 "data_size": 63488 00:12:14.873 }, 00:12:14.873 { 00:12:14.873 "name": "BaseBdev2", 00:12:14.873 
"uuid": "b045ef6a-f967-4183-949e-2dc3b36d9494", 00:12:14.873 "is_configured": true, 00:12:14.873 "data_offset": 2048, 00:12:14.873 "data_size": 63488 00:12:14.873 }, 00:12:14.873 { 00:12:14.873 "name": "BaseBdev3", 00:12:14.873 "uuid": "d23e4c12-2801-4833-9118-7d07dc157fc5", 00:12:14.873 "is_configured": true, 00:12:14.873 "data_offset": 2048, 00:12:14.873 "data_size": 63488 00:12:14.873 }, 00:12:14.873 { 00:12:14.873 "name": "BaseBdev4", 00:12:14.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.873 "is_configured": false, 00:12:14.873 "data_offset": 0, 00:12:14.873 "data_size": 0 00:12:14.873 } 00:12:14.873 ] 00:12:14.873 }' 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.873 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.472 [2024-11-05 03:23:28.939594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.472 [2024-11-05 03:23:28.939958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:15.472 [2024-11-05 03:23:28.939979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:15.472 BaseBdev4 00:12:15.472 [2024-11-05 03:23:28.940326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:15.472 [2024-11-05 03:23:28.940883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.472 03:23:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:15.472 [2024-11-05 03:23:28.941273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:15.472 [2024-11-05 03:23:28.941484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.472 [ 00:12:15.472 { 00:12:15.472 "name": "BaseBdev4", 00:12:15.472 "aliases": [ 00:12:15.472 "1c22b9f6-a3f9-4c23-8eae-00aa3f722e36" 00:12:15.472 ], 00:12:15.472 "product_name": "Malloc disk", 00:12:15.472 "block_size": 512, 
00:12:15.472 "num_blocks": 65536, 00:12:15.472 "uuid": "1c22b9f6-a3f9-4c23-8eae-00aa3f722e36", 00:12:15.472 "assigned_rate_limits": { 00:12:15.472 "rw_ios_per_sec": 0, 00:12:15.472 "rw_mbytes_per_sec": 0, 00:12:15.472 "r_mbytes_per_sec": 0, 00:12:15.472 "w_mbytes_per_sec": 0 00:12:15.472 }, 00:12:15.472 "claimed": true, 00:12:15.472 "claim_type": "exclusive_write", 00:12:15.472 "zoned": false, 00:12:15.472 "supported_io_types": { 00:12:15.472 "read": true, 00:12:15.472 "write": true, 00:12:15.472 "unmap": true, 00:12:15.472 "flush": true, 00:12:15.472 "reset": true, 00:12:15.472 "nvme_admin": false, 00:12:15.472 "nvme_io": false, 00:12:15.472 "nvme_io_md": false, 00:12:15.472 "write_zeroes": true, 00:12:15.472 "zcopy": true, 00:12:15.472 "get_zone_info": false, 00:12:15.472 "zone_management": false, 00:12:15.472 "zone_append": false, 00:12:15.472 "compare": false, 00:12:15.472 "compare_and_write": false, 00:12:15.472 "abort": true, 00:12:15.472 "seek_hole": false, 00:12:15.472 "seek_data": false, 00:12:15.472 "copy": true, 00:12:15.472 "nvme_iov_md": false 00:12:15.472 }, 00:12:15.472 "memory_domains": [ 00:12:15.472 { 00:12:15.472 "dma_device_id": "system", 00:12:15.472 "dma_device_type": 1 00:12:15.472 }, 00:12:15.472 { 00:12:15.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.472 "dma_device_type": 2 00:12:15.472 } 00:12:15.472 ], 00:12:15.472 "driver_specific": {} 00:12:15.472 } 00:12:15.472 ] 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:15.472 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.473 03:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.473 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.473 "name": "Existed_Raid", 00:12:15.473 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:15.473 "strip_size_kb": 64, 00:12:15.473 "state": "online", 00:12:15.473 "raid_level": "concat", 00:12:15.473 "superblock": true, 00:12:15.473 "num_base_bdevs": 
4, 00:12:15.473 "num_base_bdevs_discovered": 4, 00:12:15.473 "num_base_bdevs_operational": 4, 00:12:15.473 "base_bdevs_list": [ 00:12:15.473 { 00:12:15.473 "name": "BaseBdev1", 00:12:15.473 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:15.473 "is_configured": true, 00:12:15.473 "data_offset": 2048, 00:12:15.473 "data_size": 63488 00:12:15.473 }, 00:12:15.473 { 00:12:15.473 "name": "BaseBdev2", 00:12:15.473 "uuid": "b045ef6a-f967-4183-949e-2dc3b36d9494", 00:12:15.473 "is_configured": true, 00:12:15.473 "data_offset": 2048, 00:12:15.473 "data_size": 63488 00:12:15.473 }, 00:12:15.473 { 00:12:15.473 "name": "BaseBdev3", 00:12:15.473 "uuid": "d23e4c12-2801-4833-9118-7d07dc157fc5", 00:12:15.473 "is_configured": true, 00:12:15.473 "data_offset": 2048, 00:12:15.473 "data_size": 63488 00:12:15.473 }, 00:12:15.473 { 00:12:15.473 "name": "BaseBdev4", 00:12:15.473 "uuid": "1c22b9f6-a3f9-4c23-8eae-00aa3f722e36", 00:12:15.473 "is_configured": true, 00:12:15.473 "data_offset": 2048, 00:12:15.473 "data_size": 63488 00:12:15.473 } 00:12:15.473 ] 00:12:15.473 }' 00:12:15.473 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.473 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.040 
03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.040 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.041 [2024-11-05 03:23:29.512438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.041 "name": "Existed_Raid", 00:12:16.041 "aliases": [ 00:12:16.041 "a53aa875-a73f-484d-b3ab-4333d0a4d7d5" 00:12:16.041 ], 00:12:16.041 "product_name": "Raid Volume", 00:12:16.041 "block_size": 512, 00:12:16.041 "num_blocks": 253952, 00:12:16.041 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:16.041 "assigned_rate_limits": { 00:12:16.041 "rw_ios_per_sec": 0, 00:12:16.041 "rw_mbytes_per_sec": 0, 00:12:16.041 "r_mbytes_per_sec": 0, 00:12:16.041 "w_mbytes_per_sec": 0 00:12:16.041 }, 00:12:16.041 "claimed": false, 00:12:16.041 "zoned": false, 00:12:16.041 "supported_io_types": { 00:12:16.041 "read": true, 00:12:16.041 "write": true, 00:12:16.041 "unmap": true, 00:12:16.041 "flush": true, 00:12:16.041 "reset": true, 00:12:16.041 "nvme_admin": false, 00:12:16.041 "nvme_io": false, 00:12:16.041 "nvme_io_md": false, 00:12:16.041 "write_zeroes": true, 00:12:16.041 "zcopy": false, 00:12:16.041 "get_zone_info": false, 00:12:16.041 "zone_management": false, 00:12:16.041 "zone_append": false, 00:12:16.041 "compare": false, 00:12:16.041 "compare_and_write": false, 00:12:16.041 "abort": false, 00:12:16.041 "seek_hole": false, 00:12:16.041 "seek_data": false, 00:12:16.041 "copy": false, 00:12:16.041 
"nvme_iov_md": false 00:12:16.041 }, 00:12:16.041 "memory_domains": [ 00:12:16.041 { 00:12:16.041 "dma_device_id": "system", 00:12:16.041 "dma_device_type": 1 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.041 "dma_device_type": 2 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "system", 00:12:16.041 "dma_device_type": 1 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.041 "dma_device_type": 2 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "system", 00:12:16.041 "dma_device_type": 1 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.041 "dma_device_type": 2 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "system", 00:12:16.041 "dma_device_type": 1 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.041 "dma_device_type": 2 00:12:16.041 } 00:12:16.041 ], 00:12:16.041 "driver_specific": { 00:12:16.041 "raid": { 00:12:16.041 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:16.041 "strip_size_kb": 64, 00:12:16.041 "state": "online", 00:12:16.041 "raid_level": "concat", 00:12:16.041 "superblock": true, 00:12:16.041 "num_base_bdevs": 4, 00:12:16.041 "num_base_bdevs_discovered": 4, 00:12:16.041 "num_base_bdevs_operational": 4, 00:12:16.041 "base_bdevs_list": [ 00:12:16.041 { 00:12:16.041 "name": "BaseBdev1", 00:12:16.041 "uuid": "7db6d7a2-46eb-47ec-9406-8061d925d66b", 00:12:16.041 "is_configured": true, 00:12:16.041 "data_offset": 2048, 00:12:16.041 "data_size": 63488 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "name": "BaseBdev2", 00:12:16.041 "uuid": "b045ef6a-f967-4183-949e-2dc3b36d9494", 00:12:16.041 "is_configured": true, 00:12:16.041 "data_offset": 2048, 00:12:16.041 "data_size": 63488 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "name": "BaseBdev3", 00:12:16.041 "uuid": "d23e4c12-2801-4833-9118-7d07dc157fc5", 00:12:16.041 "is_configured": true, 
00:12:16.041 "data_offset": 2048, 00:12:16.041 "data_size": 63488 00:12:16.041 }, 00:12:16.041 { 00:12:16.041 "name": "BaseBdev4", 00:12:16.041 "uuid": "1c22b9f6-a3f9-4c23-8eae-00aa3f722e36", 00:12:16.041 "is_configured": true, 00:12:16.041 "data_offset": 2048, 00:12:16.041 "data_size": 63488 00:12:16.041 } 00:12:16.041 ] 00:12:16.041 } 00:12:16.041 } 00:12:16.041 }' 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:16.041 BaseBdev2 00:12:16.041 BaseBdev3 00:12:16.041 BaseBdev4' 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.041 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.300 03:23:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.300 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.300 [2024-11-05 03:23:29.884285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.300 [2024-11-05 03:23:29.884727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.300 [2024-11-05 03:23:29.884807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.558 03:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:16.558 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.558 "name": "Existed_Raid", 00:12:16.558 "uuid": "a53aa875-a73f-484d-b3ab-4333d0a4d7d5", 00:12:16.558 "strip_size_kb": 64, 00:12:16.558 "state": "offline", 00:12:16.558 "raid_level": "concat", 00:12:16.558 "superblock": true, 00:12:16.558 "num_base_bdevs": 4, 00:12:16.558 "num_base_bdevs_discovered": 3, 00:12:16.558 "num_base_bdevs_operational": 3, 00:12:16.558 "base_bdevs_list": [ 00:12:16.558 { 00:12:16.558 "name": null, 00:12:16.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.558 "is_configured": false, 00:12:16.558 "data_offset": 0, 00:12:16.558 "data_size": 63488 00:12:16.558 }, 00:12:16.558 { 00:12:16.558 "name": "BaseBdev2", 00:12:16.558 "uuid": "b045ef6a-f967-4183-949e-2dc3b36d9494", 00:12:16.558 "is_configured": true, 00:12:16.558 "data_offset": 2048, 00:12:16.558 "data_size": 63488 00:12:16.558 }, 00:12:16.558 { 00:12:16.558 "name": "BaseBdev3", 00:12:16.558 "uuid": "d23e4c12-2801-4833-9118-7d07dc157fc5", 00:12:16.558 "is_configured": true, 00:12:16.558 "data_offset": 2048, 00:12:16.558 "data_size": 63488 00:12:16.558 }, 00:12:16.558 { 00:12:16.558 "name": "BaseBdev4", 00:12:16.558 "uuid": "1c22b9f6-a3f9-4c23-8eae-00aa3f722e36", 00:12:16.558 "is_configured": true, 00:12:16.558 "data_offset": 2048, 00:12:16.558 "data_size": 63488 00:12:16.558 } 00:12:16.558 ] 00:12:16.558 }' 00:12:16.558 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.558 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.123 
03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.123 [2024-11-05 03:23:30.529599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:17.123 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.124 [2024-11-05 03:23:30.663237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.124 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:17.396 03:23:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 [2024-11-05 03:23:30.804231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:17.396 [2024-11-05 03:23:30.804489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 BaseBdev2 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.396 03:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 [ 00:12:17.396 { 00:12:17.396 "name": "BaseBdev2", 00:12:17.396 "aliases": [ 00:12:17.396 
"401dd501-4422-4f39-baa8-564a2d35d710" 00:12:17.396 ], 00:12:17.396 "product_name": "Malloc disk", 00:12:17.396 "block_size": 512, 00:12:17.396 "num_blocks": 65536, 00:12:17.396 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:17.396 "assigned_rate_limits": { 00:12:17.396 "rw_ios_per_sec": 0, 00:12:17.396 "rw_mbytes_per_sec": 0, 00:12:17.396 "r_mbytes_per_sec": 0, 00:12:17.396 "w_mbytes_per_sec": 0 00:12:17.396 }, 00:12:17.396 "claimed": false, 00:12:17.396 "zoned": false, 00:12:17.396 "supported_io_types": { 00:12:17.396 "read": true, 00:12:17.396 "write": true, 00:12:17.396 "unmap": true, 00:12:17.396 "flush": true, 00:12:17.396 "reset": true, 00:12:17.396 "nvme_admin": false, 00:12:17.396 "nvme_io": false, 00:12:17.396 "nvme_io_md": false, 00:12:17.396 "write_zeroes": true, 00:12:17.396 "zcopy": true, 00:12:17.396 "get_zone_info": false, 00:12:17.396 "zone_management": false, 00:12:17.396 "zone_append": false, 00:12:17.396 "compare": false, 00:12:17.396 "compare_and_write": false, 00:12:17.396 "abort": true, 00:12:17.396 "seek_hole": false, 00:12:17.396 "seek_data": false, 00:12:17.396 "copy": true, 00:12:17.396 "nvme_iov_md": false 00:12:17.396 }, 00:12:17.396 "memory_domains": [ 00:12:17.396 { 00:12:17.396 "dma_device_id": "system", 00:12:17.396 "dma_device_type": 1 00:12:17.396 }, 00:12:17.396 { 00:12:17.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.396 "dma_device_type": 2 00:12:17.396 } 00:12:17.396 ], 00:12:17.396 "driver_specific": {} 00:12:17.396 } 00:12:17.396 ] 00:12:17.396 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.396 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:17.396 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.396 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.396 03:23:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.396 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.396 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.656 BaseBdev3 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.656 [ 00:12:17.656 { 
00:12:17.656 "name": "BaseBdev3", 00:12:17.656 "aliases": [ 00:12:17.656 "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a" 00:12:17.656 ], 00:12:17.656 "product_name": "Malloc disk", 00:12:17.656 "block_size": 512, 00:12:17.656 "num_blocks": 65536, 00:12:17.656 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:17.656 "assigned_rate_limits": { 00:12:17.656 "rw_ios_per_sec": 0, 00:12:17.656 "rw_mbytes_per_sec": 0, 00:12:17.656 "r_mbytes_per_sec": 0, 00:12:17.656 "w_mbytes_per_sec": 0 00:12:17.656 }, 00:12:17.656 "claimed": false, 00:12:17.656 "zoned": false, 00:12:17.656 "supported_io_types": { 00:12:17.656 "read": true, 00:12:17.656 "write": true, 00:12:17.656 "unmap": true, 00:12:17.656 "flush": true, 00:12:17.656 "reset": true, 00:12:17.656 "nvme_admin": false, 00:12:17.656 "nvme_io": false, 00:12:17.656 "nvme_io_md": false, 00:12:17.656 "write_zeroes": true, 00:12:17.656 "zcopy": true, 00:12:17.656 "get_zone_info": false, 00:12:17.656 "zone_management": false, 00:12:17.656 "zone_append": false, 00:12:17.656 "compare": false, 00:12:17.656 "compare_and_write": false, 00:12:17.656 "abort": true, 00:12:17.656 "seek_hole": false, 00:12:17.656 "seek_data": false, 00:12:17.656 "copy": true, 00:12:17.656 "nvme_iov_md": false 00:12:17.656 }, 00:12:17.656 "memory_domains": [ 00:12:17.656 { 00:12:17.656 "dma_device_id": "system", 00:12:17.656 "dma_device_type": 1 00:12:17.656 }, 00:12:17.656 { 00:12:17.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.656 "dma_device_type": 2 00:12:17.656 } 00:12:17.656 ], 00:12:17.656 "driver_specific": {} 00:12:17.656 } 00:12:17.656 ] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.656 BaseBdev4 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.656 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:17.656 [ 00:12:17.656 { 00:12:17.656 "name": "BaseBdev4", 00:12:17.656 "aliases": [ 00:12:17.656 "6785eaa2-169a-416c-ab03-e56c7035e2a6" 00:12:17.656 ], 00:12:17.656 "product_name": "Malloc disk", 00:12:17.656 "block_size": 512, 00:12:17.656 "num_blocks": 65536, 00:12:17.656 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:17.656 "assigned_rate_limits": { 00:12:17.656 "rw_ios_per_sec": 0, 00:12:17.656 "rw_mbytes_per_sec": 0, 00:12:17.656 "r_mbytes_per_sec": 0, 00:12:17.656 "w_mbytes_per_sec": 0 00:12:17.656 }, 00:12:17.656 "claimed": false, 00:12:17.656 "zoned": false, 00:12:17.656 "supported_io_types": { 00:12:17.656 "read": true, 00:12:17.656 "write": true, 00:12:17.656 "unmap": true, 00:12:17.656 "flush": true, 00:12:17.656 "reset": true, 00:12:17.656 "nvme_admin": false, 00:12:17.656 "nvme_io": false, 00:12:17.656 "nvme_io_md": false, 00:12:17.656 "write_zeroes": true, 00:12:17.656 "zcopy": true, 00:12:17.656 "get_zone_info": false, 00:12:17.656 "zone_management": false, 00:12:17.656 "zone_append": false, 00:12:17.656 "compare": false, 00:12:17.656 "compare_and_write": false, 00:12:17.656 "abort": true, 00:12:17.656 "seek_hole": false, 00:12:17.656 "seek_data": false, 00:12:17.656 "copy": true, 00:12:17.656 "nvme_iov_md": false 00:12:17.657 }, 00:12:17.657 "memory_domains": [ 00:12:17.657 { 00:12:17.657 "dma_device_id": "system", 00:12:17.657 "dma_device_type": 1 00:12:17.657 }, 00:12:17.657 { 00:12:17.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.657 "dma_device_type": 2 00:12:17.657 } 00:12:17.657 ], 00:12:17.657 "driver_specific": {} 00:12:17.657 } 00:12:17.657 ] 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.657 03:23:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.657 [2024-11-05 03:23:31.168485] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.657 [2024-11-05 03:23:31.168869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.657 [2024-11-05 03:23:31.169091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.657 [2024-11-05 03:23:31.171731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.657 [2024-11-05 03:23:31.171800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.657 "name": "Existed_Raid", 00:12:17.657 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:17.657 "strip_size_kb": 64, 00:12:17.657 "state": "configuring", 00:12:17.657 "raid_level": "concat", 00:12:17.657 "superblock": true, 00:12:17.657 "num_base_bdevs": 4, 00:12:17.657 "num_base_bdevs_discovered": 3, 00:12:17.657 "num_base_bdevs_operational": 4, 00:12:17.657 "base_bdevs_list": [ 00:12:17.657 { 00:12:17.657 "name": "BaseBdev1", 00:12:17.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.657 "is_configured": false, 00:12:17.657 "data_offset": 0, 00:12:17.657 "data_size": 0 00:12:17.657 }, 00:12:17.657 { 00:12:17.657 "name": "BaseBdev2", 00:12:17.657 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:17.657 "is_configured": true, 00:12:17.657 "data_offset": 2048, 00:12:17.657 "data_size": 63488 
00:12:17.657 }, 00:12:17.657 { 00:12:17.657 "name": "BaseBdev3", 00:12:17.657 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:17.657 "is_configured": true, 00:12:17.657 "data_offset": 2048, 00:12:17.657 "data_size": 63488 00:12:17.657 }, 00:12:17.657 { 00:12:17.657 "name": "BaseBdev4", 00:12:17.657 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:17.657 "is_configured": true, 00:12:17.657 "data_offset": 2048, 00:12:17.657 "data_size": 63488 00:12:17.657 } 00:12:17.657 ] 00:12:17.657 }' 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.657 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.224 [2024-11-05 03:23:31.704618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.224 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.225 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.225 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.225 "name": "Existed_Raid", 00:12:18.225 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:18.225 "strip_size_kb": 64, 00:12:18.225 "state": "configuring", 00:12:18.225 "raid_level": "concat", 00:12:18.225 "superblock": true, 00:12:18.225 "num_base_bdevs": 4, 00:12:18.225 "num_base_bdevs_discovered": 2, 00:12:18.225 "num_base_bdevs_operational": 4, 00:12:18.225 "base_bdevs_list": [ 00:12:18.225 { 00:12:18.225 "name": "BaseBdev1", 00:12:18.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.225 "is_configured": false, 00:12:18.225 "data_offset": 0, 00:12:18.225 "data_size": 0 00:12:18.225 }, 00:12:18.225 { 00:12:18.225 "name": null, 00:12:18.225 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:18.225 "is_configured": false, 00:12:18.225 "data_offset": 0, 00:12:18.225 "data_size": 63488 
00:12:18.225 }, 00:12:18.225 { 00:12:18.225 "name": "BaseBdev3", 00:12:18.225 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:18.225 "is_configured": true, 00:12:18.225 "data_offset": 2048, 00:12:18.225 "data_size": 63488 00:12:18.225 }, 00:12:18.225 { 00:12:18.225 "name": "BaseBdev4", 00:12:18.225 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:18.225 "is_configured": true, 00:12:18.225 "data_offset": 2048, 00:12:18.225 "data_size": 63488 00:12:18.225 } 00:12:18.225 ] 00:12:18.225 }' 00:12:18.225 03:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.225 03:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 [2024-11-05 03:23:32.306973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.802 BaseBdev1 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 [ 00:12:18.802 { 00:12:18.802 "name": "BaseBdev1", 00:12:18.802 "aliases": [ 00:12:18.802 "56cc45e5-8f45-45c3-a839-6059cfae07bd" 00:12:18.802 ], 00:12:18.802 "product_name": "Malloc disk", 00:12:18.802 "block_size": 512, 00:12:18.802 "num_blocks": 65536, 00:12:18.802 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:18.802 "assigned_rate_limits": { 00:12:18.802 "rw_ios_per_sec": 0, 00:12:18.802 "rw_mbytes_per_sec": 0, 
00:12:18.802 "r_mbytes_per_sec": 0, 00:12:18.802 "w_mbytes_per_sec": 0 00:12:18.802 }, 00:12:18.802 "claimed": true, 00:12:18.802 "claim_type": "exclusive_write", 00:12:18.802 "zoned": false, 00:12:18.802 "supported_io_types": { 00:12:18.802 "read": true, 00:12:18.802 "write": true, 00:12:18.802 "unmap": true, 00:12:18.802 "flush": true, 00:12:18.802 "reset": true, 00:12:18.802 "nvme_admin": false, 00:12:18.802 "nvme_io": false, 00:12:18.802 "nvme_io_md": false, 00:12:18.802 "write_zeroes": true, 00:12:18.802 "zcopy": true, 00:12:18.802 "get_zone_info": false, 00:12:18.802 "zone_management": false, 00:12:18.802 "zone_append": false, 00:12:18.802 "compare": false, 00:12:18.802 "compare_and_write": false, 00:12:18.802 "abort": true, 00:12:18.802 "seek_hole": false, 00:12:18.802 "seek_data": false, 00:12:18.802 "copy": true, 00:12:18.802 "nvme_iov_md": false 00:12:18.802 }, 00:12:18.802 "memory_domains": [ 00:12:18.802 { 00:12:18.802 "dma_device_id": "system", 00:12:18.802 "dma_device_type": 1 00:12:18.802 }, 00:12:18.802 { 00:12:18.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.802 "dma_device_type": 2 00:12:18.802 } 00:12:18.802 ], 00:12:18.802 "driver_specific": {} 00:12:18.802 } 00:12:18.802 ] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.802 03:23:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.802 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.802 "name": "Existed_Raid", 00:12:18.802 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:18.802 "strip_size_kb": 64, 00:12:18.802 "state": "configuring", 00:12:18.802 "raid_level": "concat", 00:12:18.802 "superblock": true, 00:12:18.802 "num_base_bdevs": 4, 00:12:18.802 "num_base_bdevs_discovered": 3, 00:12:18.802 "num_base_bdevs_operational": 4, 00:12:18.802 "base_bdevs_list": [ 00:12:18.802 { 00:12:18.802 "name": "BaseBdev1", 00:12:18.802 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:18.802 "is_configured": true, 00:12:18.802 "data_offset": 2048, 00:12:18.802 "data_size": 63488 00:12:18.802 }, 00:12:18.802 { 
00:12:18.802 "name": null, 00:12:18.802 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:18.802 "is_configured": false, 00:12:18.802 "data_offset": 0, 00:12:18.802 "data_size": 63488 00:12:18.802 }, 00:12:18.802 { 00:12:18.802 "name": "BaseBdev3", 00:12:18.803 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:18.803 "is_configured": true, 00:12:18.803 "data_offset": 2048, 00:12:18.803 "data_size": 63488 00:12:18.803 }, 00:12:18.803 { 00:12:18.803 "name": "BaseBdev4", 00:12:18.803 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:18.803 "is_configured": true, 00:12:18.803 "data_offset": 2048, 00:12:18.803 "data_size": 63488 00:12:18.803 } 00:12:18.803 ] 00:12:18.803 }' 00:12:18.803 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.803 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 [2024-11-05 03:23:32.919275] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.385 03:23:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.385 "name": "Existed_Raid", 00:12:19.385 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:19.385 "strip_size_kb": 64, 00:12:19.385 "state": "configuring", 00:12:19.385 "raid_level": "concat", 00:12:19.385 "superblock": true, 00:12:19.385 "num_base_bdevs": 4, 00:12:19.385 "num_base_bdevs_discovered": 2, 00:12:19.385 "num_base_bdevs_operational": 4, 00:12:19.385 "base_bdevs_list": [ 00:12:19.385 { 00:12:19.385 "name": "BaseBdev1", 00:12:19.385 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:19.385 "is_configured": true, 00:12:19.385 "data_offset": 2048, 00:12:19.385 "data_size": 63488 00:12:19.385 }, 00:12:19.385 { 00:12:19.385 "name": null, 00:12:19.385 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:19.385 "is_configured": false, 00:12:19.385 "data_offset": 0, 00:12:19.385 "data_size": 63488 00:12:19.385 }, 00:12:19.385 { 00:12:19.385 "name": null, 00:12:19.385 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:19.385 "is_configured": false, 00:12:19.385 "data_offset": 0, 00:12:19.385 "data_size": 63488 00:12:19.385 }, 00:12:19.385 { 00:12:19.385 "name": "BaseBdev4", 00:12:19.385 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:19.385 "is_configured": true, 00:12:19.385 "data_offset": 2048, 00:12:19.385 "data_size": 63488 00:12:19.385 } 00:12:19.385 ] 00:12:19.385 }' 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.385 03:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.952 
03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.952 [2024-11-05 03:23:33.495519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.952 "name": "Existed_Raid", 00:12:19.952 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:19.952 "strip_size_kb": 64, 00:12:19.952 "state": "configuring", 00:12:19.952 "raid_level": "concat", 00:12:19.952 "superblock": true, 00:12:19.952 "num_base_bdevs": 4, 00:12:19.952 "num_base_bdevs_discovered": 3, 00:12:19.952 "num_base_bdevs_operational": 4, 00:12:19.952 "base_bdevs_list": [ 00:12:19.952 { 00:12:19.952 "name": "BaseBdev1", 00:12:19.952 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:19.952 "is_configured": true, 00:12:19.952 "data_offset": 2048, 00:12:19.952 "data_size": 63488 00:12:19.952 }, 00:12:19.952 { 00:12:19.952 "name": null, 00:12:19.952 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:19.952 "is_configured": false, 00:12:19.952 "data_offset": 0, 00:12:19.952 "data_size": 63488 00:12:19.952 }, 00:12:19.952 { 00:12:19.952 "name": "BaseBdev3", 00:12:19.952 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:19.952 "is_configured": true, 00:12:19.952 "data_offset": 2048, 00:12:19.952 "data_size": 63488 00:12:19.952 }, 00:12:19.952 { 00:12:19.952 "name": "BaseBdev4", 00:12:19.952 "uuid": 
"6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:19.952 "is_configured": true, 00:12:19.952 "data_offset": 2048, 00:12:19.952 "data_size": 63488 00:12:19.952 } 00:12:19.952 ] 00:12:19.952 }' 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.952 03:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.520 [2024-11-05 03:23:34.075709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.520 03:23:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.778 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.779 "name": "Existed_Raid", 00:12:20.779 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:20.779 "strip_size_kb": 64, 00:12:20.779 "state": "configuring", 00:12:20.779 "raid_level": "concat", 00:12:20.779 "superblock": true, 00:12:20.779 "num_base_bdevs": 4, 00:12:20.779 "num_base_bdevs_discovered": 2, 00:12:20.779 "num_base_bdevs_operational": 4, 00:12:20.779 "base_bdevs_list": [ 00:12:20.779 { 00:12:20.779 "name": null, 00:12:20.779 
"uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:20.779 "is_configured": false, 00:12:20.779 "data_offset": 0, 00:12:20.779 "data_size": 63488 00:12:20.779 }, 00:12:20.779 { 00:12:20.779 "name": null, 00:12:20.779 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:20.779 "is_configured": false, 00:12:20.779 "data_offset": 0, 00:12:20.779 "data_size": 63488 00:12:20.779 }, 00:12:20.779 { 00:12:20.779 "name": "BaseBdev3", 00:12:20.779 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:20.779 "is_configured": true, 00:12:20.779 "data_offset": 2048, 00:12:20.779 "data_size": 63488 00:12:20.779 }, 00:12:20.779 { 00:12:20.779 "name": "BaseBdev4", 00:12:20.779 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:20.779 "is_configured": true, 00:12:20.779 "data_offset": 2048, 00:12:20.779 "data_size": 63488 00:12:20.779 } 00:12:20.779 ] 00:12:20.779 }' 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.779 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.347 [2024-11-05 03:23:34.739380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.347 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.347 "name": "Existed_Raid", 00:12:21.347 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:21.347 "strip_size_kb": 64, 00:12:21.347 "state": "configuring", 00:12:21.347 "raid_level": "concat", 00:12:21.347 "superblock": true, 00:12:21.347 "num_base_bdevs": 4, 00:12:21.347 "num_base_bdevs_discovered": 3, 00:12:21.347 "num_base_bdevs_operational": 4, 00:12:21.347 "base_bdevs_list": [ 00:12:21.347 { 00:12:21.347 "name": null, 00:12:21.347 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:21.347 "is_configured": false, 00:12:21.347 "data_offset": 0, 00:12:21.347 "data_size": 63488 00:12:21.347 }, 00:12:21.347 { 00:12:21.347 "name": "BaseBdev2", 00:12:21.347 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:21.347 "is_configured": true, 00:12:21.347 "data_offset": 2048, 00:12:21.347 "data_size": 63488 00:12:21.347 }, 00:12:21.347 { 00:12:21.347 "name": "BaseBdev3", 00:12:21.347 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:21.347 "is_configured": true, 00:12:21.347 "data_offset": 2048, 00:12:21.347 "data_size": 63488 00:12:21.347 }, 00:12:21.348 { 00:12:21.348 "name": "BaseBdev4", 00:12:21.348 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:21.348 "is_configured": true, 00:12:21.348 "data_offset": 2048, 00:12:21.348 "data_size": 63488 00:12:21.348 } 00:12:21.348 ] 00:12:21.348 }' 00:12:21.348 03:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.348 03:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.916 03:23:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56cc45e5-8f45-45c3-a839-6059cfae07bd 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 [2024-11-05 03:23:35.389238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:21.916 [2024-11-05 03:23:35.389608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.916 [2024-11-05 03:23:35.389627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:21.916 NewBaseBdev 00:12:21.916 [2024-11-05 03:23:35.390205] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.916 [2024-11-05 03:23:35.390710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.916 [2024-11-05 03:23:35.390748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:21.916 [2024-11-05 03:23:35.390909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.916 
03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 [ 00:12:21.916 { 00:12:21.916 "name": "NewBaseBdev", 00:12:21.916 "aliases": [ 00:12:21.916 "56cc45e5-8f45-45c3-a839-6059cfae07bd" 00:12:21.916 ], 00:12:21.916 "product_name": "Malloc disk", 00:12:21.916 "block_size": 512, 00:12:21.916 "num_blocks": 65536, 00:12:21.916 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:21.916 "assigned_rate_limits": { 00:12:21.916 "rw_ios_per_sec": 0, 00:12:21.916 "rw_mbytes_per_sec": 0, 00:12:21.916 "r_mbytes_per_sec": 0, 00:12:21.916 "w_mbytes_per_sec": 0 00:12:21.916 }, 00:12:21.916 "claimed": true, 00:12:21.916 "claim_type": "exclusive_write", 00:12:21.916 "zoned": false, 00:12:21.916 "supported_io_types": { 00:12:21.916 "read": true, 00:12:21.916 "write": true, 00:12:21.916 "unmap": true, 00:12:21.916 "flush": true, 00:12:21.916 "reset": true, 00:12:21.916 "nvme_admin": false, 00:12:21.916 "nvme_io": false, 00:12:21.916 "nvme_io_md": false, 00:12:21.916 "write_zeroes": true, 00:12:21.916 "zcopy": true, 00:12:21.916 "get_zone_info": false, 00:12:21.916 "zone_management": false, 00:12:21.916 "zone_append": false, 00:12:21.916 "compare": false, 00:12:21.916 "compare_and_write": false, 00:12:21.916 "abort": true, 00:12:21.916 "seek_hole": false, 00:12:21.916 "seek_data": false, 00:12:21.916 "copy": true, 00:12:21.916 "nvme_iov_md": false 00:12:21.916 }, 00:12:21.916 "memory_domains": [ 00:12:21.916 { 00:12:21.916 "dma_device_id": "system", 00:12:21.916 "dma_device_type": 1 00:12:21.916 }, 00:12:21.916 { 00:12:21.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.916 "dma_device_type": 2 00:12:21.916 } 00:12:21.916 ], 00:12:21.916 "driver_specific": {} 00:12:21.916 } 00:12:21.916 ] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:21.916 03:23:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.916 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.916 "name": "Existed_Raid", 00:12:21.916 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:21.916 "strip_size_kb": 64, 00:12:21.916 
"state": "online", 00:12:21.916 "raid_level": "concat", 00:12:21.916 "superblock": true, 00:12:21.916 "num_base_bdevs": 4, 00:12:21.916 "num_base_bdevs_discovered": 4, 00:12:21.916 "num_base_bdevs_operational": 4, 00:12:21.916 "base_bdevs_list": [ 00:12:21.916 { 00:12:21.916 "name": "NewBaseBdev", 00:12:21.916 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:21.916 "is_configured": true, 00:12:21.916 "data_offset": 2048, 00:12:21.916 "data_size": 63488 00:12:21.916 }, 00:12:21.916 { 00:12:21.916 "name": "BaseBdev2", 00:12:21.916 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:21.917 "is_configured": true, 00:12:21.917 "data_offset": 2048, 00:12:21.917 "data_size": 63488 00:12:21.917 }, 00:12:21.917 { 00:12:21.917 "name": "BaseBdev3", 00:12:21.917 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:21.917 "is_configured": true, 00:12:21.917 "data_offset": 2048, 00:12:21.917 "data_size": 63488 00:12:21.917 }, 00:12:21.917 { 00:12:21.917 "name": "BaseBdev4", 00:12:21.917 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:21.917 "is_configured": true, 00:12:21.917 "data_offset": 2048, 00:12:21.917 "data_size": 63488 00:12:21.917 } 00:12:21.917 ] 00:12:21.917 }' 00:12:21.917 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.917 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.485 
03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 [2024-11-05 03:23:35.970113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.485 03:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.485 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.485 "name": "Existed_Raid", 00:12:22.485 "aliases": [ 00:12:22.485 "6770cbea-34c8-4f16-ab2f-4dbd001c91a0" 00:12:22.485 ], 00:12:22.485 "product_name": "Raid Volume", 00:12:22.485 "block_size": 512, 00:12:22.485 "num_blocks": 253952, 00:12:22.485 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:22.485 "assigned_rate_limits": { 00:12:22.485 "rw_ios_per_sec": 0, 00:12:22.485 "rw_mbytes_per_sec": 0, 00:12:22.485 "r_mbytes_per_sec": 0, 00:12:22.485 "w_mbytes_per_sec": 0 00:12:22.485 }, 00:12:22.485 "claimed": false, 00:12:22.485 "zoned": false, 00:12:22.485 "supported_io_types": { 00:12:22.485 "read": true, 00:12:22.485 "write": true, 00:12:22.485 "unmap": true, 00:12:22.485 "flush": true, 00:12:22.485 "reset": true, 00:12:22.485 "nvme_admin": false, 00:12:22.485 "nvme_io": false, 00:12:22.485 "nvme_io_md": false, 00:12:22.485 "write_zeroes": true, 00:12:22.485 "zcopy": false, 00:12:22.485 "get_zone_info": false, 00:12:22.485 "zone_management": false, 00:12:22.485 "zone_append": false, 00:12:22.485 "compare": false, 00:12:22.485 "compare_and_write": false, 00:12:22.485 "abort": 
false, 00:12:22.485 "seek_hole": false, 00:12:22.485 "seek_data": false, 00:12:22.485 "copy": false, 00:12:22.485 "nvme_iov_md": false 00:12:22.485 }, 00:12:22.485 "memory_domains": [ 00:12:22.485 { 00:12:22.485 "dma_device_id": "system", 00:12:22.485 "dma_device_type": 1 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.485 "dma_device_type": 2 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "system", 00:12:22.485 "dma_device_type": 1 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.485 "dma_device_type": 2 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "system", 00:12:22.485 "dma_device_type": 1 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.485 "dma_device_type": 2 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "system", 00:12:22.485 "dma_device_type": 1 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.485 "dma_device_type": 2 00:12:22.485 } 00:12:22.485 ], 00:12:22.485 "driver_specific": { 00:12:22.485 "raid": { 00:12:22.485 "uuid": "6770cbea-34c8-4f16-ab2f-4dbd001c91a0", 00:12:22.485 "strip_size_kb": 64, 00:12:22.485 "state": "online", 00:12:22.485 "raid_level": "concat", 00:12:22.485 "superblock": true, 00:12:22.485 "num_base_bdevs": 4, 00:12:22.485 "num_base_bdevs_discovered": 4, 00:12:22.485 "num_base_bdevs_operational": 4, 00:12:22.485 "base_bdevs_list": [ 00:12:22.485 { 00:12:22.485 "name": "NewBaseBdev", 00:12:22.485 "uuid": "56cc45e5-8f45-45c3-a839-6059cfae07bd", 00:12:22.485 "is_configured": true, 00:12:22.485 "data_offset": 2048, 00:12:22.485 "data_size": 63488 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "name": "BaseBdev2", 00:12:22.485 "uuid": "401dd501-4422-4f39-baa8-564a2d35d710", 00:12:22.485 "is_configured": true, 00:12:22.485 "data_offset": 2048, 00:12:22.485 "data_size": 63488 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 
"name": "BaseBdev3", 00:12:22.485 "uuid": "fb6c4498-6ef7-4542-86c9-dbd57ca64a6a", 00:12:22.485 "is_configured": true, 00:12:22.485 "data_offset": 2048, 00:12:22.485 "data_size": 63488 00:12:22.485 }, 00:12:22.485 { 00:12:22.485 "name": "BaseBdev4", 00:12:22.485 "uuid": "6785eaa2-169a-416c-ab03-e56c7035e2a6", 00:12:22.485 "is_configured": true, 00:12:22.485 "data_offset": 2048, 00:12:22.485 "data_size": 63488 00:12:22.485 } 00:12:22.485 ] 00:12:22.485 } 00:12:22.485 } 00:12:22.485 }' 00:12:22.485 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.485 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:22.485 BaseBdev2 00:12:22.485 BaseBdev3 00:12:22.485 BaseBdev4' 00:12:22.485 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.486 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.486 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.486 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.486 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:22.486 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.744 03:23:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.744 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 [2024-11-05 03:23:36.310092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.745 [2024-11-05 03:23:36.310149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.745 [2024-11-05 03:23:36.310249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.745 [2024-11-05 03:23:36.310364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.745 [2024-11-05 03:23:36.310398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71829 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 71829 ']' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 71829 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71829 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71829' 00:12:22.745 killing process with pid 71829 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 71829 00:12:22.745 03:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 71829 00:12:22.745 [2024-11-05 03:23:36.343750] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.313 [2024-11-05 03:23:36.695423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.352 03:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:24.352 00:12:24.352 real 0m12.748s 00:12:24.352 user 0m21.037s 00:12:24.352 sys 0m1.867s 00:12:24.352 03:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.352 03:23:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.352 ************************************ 00:12:24.352 END TEST raid_state_function_test_sb 00:12:24.352 ************************************ 00:12:24.352 03:23:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:24.352 03:23:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:24.352 03:23:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.352 03:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.352 ************************************ 00:12:24.352 START TEST raid_superblock_test 00:12:24.352 ************************************ 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72509 00:12:24.352 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72509 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72509 ']' 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:24.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:24.353 03:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.353 [2024-11-05 03:23:37.872210] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:24.353 [2024-11-05 03:23:37.872434] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72509 ] 00:12:24.612 [2024-11-05 03:23:38.065850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.612 [2024-11-05 03:23:38.227603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.871 [2024-11-05 03:23:38.444104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.871 [2024-11-05 03:23:38.444181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.439 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:25.439 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:25.439 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:25.439 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.439 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:25.440 
03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 malloc1 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 [2024-11-05 03:23:38.949066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.440 [2024-11-05 03:23:38.949154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.440 [2024-11-05 03:23:38.949198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.440 [2024-11-05 03:23:38.949218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.440 [2024-11-05 03:23:38.952173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.440 [2024-11-05 03:23:38.952219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.440 pt1 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 malloc2 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.440 03:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 [2024-11-05 03:23:39.006226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.440 [2024-11-05 03:23:39.006326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.440 [2024-11-05 03:23:39.006359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.440 [2024-11-05 03:23:39.006374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.440 [2024-11-05 03:23:39.009205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.440 [2024-11-05 03:23:39.009265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.440 
pt2 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 malloc3 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 [2024-11-05 03:23:39.069480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.440 [2024-11-05 03:23:39.069556] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.440 [2024-11-05 03:23:39.069608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:25.440 [2024-11-05 03:23:39.069633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.440 [2024-11-05 03:23:39.072294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.440 [2024-11-05 03:23:39.072378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.440 pt3 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:25.440 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.699 malloc4 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.699 [2024-11-05 03:23:39.124102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:25.699 [2024-11-05 03:23:39.124207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.699 [2024-11-05 03:23:39.124243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:25.699 [2024-11-05 03:23:39.124257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.699 [2024-11-05 03:23:39.127186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.699 [2024-11-05 03:23:39.127245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:25.699 pt4 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.699 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.699 [2024-11-05 03:23:39.136200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.699 [2024-11-05 
03:23:39.138641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.699 [2024-11-05 03:23:39.138762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.699 [2024-11-05 03:23:39.138849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:25.699 [2024-11-05 03:23:39.139135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.699 [2024-11-05 03:23:39.139163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:25.699 [2024-11-05 03:23:39.139546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.699 [2024-11-05 03:23:39.139806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.699 [2024-11-05 03:23:39.139836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.700 [2024-11-05 03:23:39.140076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.700 "name": "raid_bdev1", 00:12:25.700 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:25.700 "strip_size_kb": 64, 00:12:25.700 "state": "online", 00:12:25.700 "raid_level": "concat", 00:12:25.700 "superblock": true, 00:12:25.700 "num_base_bdevs": 4, 00:12:25.700 "num_base_bdevs_discovered": 4, 00:12:25.700 "num_base_bdevs_operational": 4, 00:12:25.700 "base_bdevs_list": [ 00:12:25.700 { 00:12:25.700 "name": "pt1", 00:12:25.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.700 "is_configured": true, 00:12:25.700 "data_offset": 2048, 00:12:25.700 "data_size": 63488 00:12:25.700 }, 00:12:25.700 { 00:12:25.700 "name": "pt2", 00:12:25.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.700 "is_configured": true, 00:12:25.700 "data_offset": 2048, 00:12:25.700 "data_size": 63488 00:12:25.700 }, 00:12:25.700 { 00:12:25.700 "name": "pt3", 00:12:25.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.700 "is_configured": true, 00:12:25.700 "data_offset": 2048, 00:12:25.700 
"data_size": 63488 00:12:25.700 }, 00:12:25.700 { 00:12:25.700 "name": "pt4", 00:12:25.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.700 "is_configured": true, 00:12:25.700 "data_offset": 2048, 00:12:25.700 "data_size": 63488 00:12:25.700 } 00:12:25.700 ] 00:12:25.700 }' 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.700 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.267 [2024-11-05 03:23:39.676887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.267 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.267 "name": "raid_bdev1", 00:12:26.267 "aliases": [ 00:12:26.267 "36578d8a-ca45-41f6-bcdd-cc26e9700df1" 
00:12:26.267 ], 00:12:26.267 "product_name": "Raid Volume", 00:12:26.267 "block_size": 512, 00:12:26.267 "num_blocks": 253952, 00:12:26.267 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:26.267 "assigned_rate_limits": { 00:12:26.267 "rw_ios_per_sec": 0, 00:12:26.267 "rw_mbytes_per_sec": 0, 00:12:26.267 "r_mbytes_per_sec": 0, 00:12:26.267 "w_mbytes_per_sec": 0 00:12:26.267 }, 00:12:26.267 "claimed": false, 00:12:26.267 "zoned": false, 00:12:26.267 "supported_io_types": { 00:12:26.267 "read": true, 00:12:26.267 "write": true, 00:12:26.267 "unmap": true, 00:12:26.267 "flush": true, 00:12:26.267 "reset": true, 00:12:26.267 "nvme_admin": false, 00:12:26.267 "nvme_io": false, 00:12:26.267 "nvme_io_md": false, 00:12:26.267 "write_zeroes": true, 00:12:26.267 "zcopy": false, 00:12:26.267 "get_zone_info": false, 00:12:26.267 "zone_management": false, 00:12:26.267 "zone_append": false, 00:12:26.267 "compare": false, 00:12:26.267 "compare_and_write": false, 00:12:26.267 "abort": false, 00:12:26.267 "seek_hole": false, 00:12:26.267 "seek_data": false, 00:12:26.267 "copy": false, 00:12:26.267 "nvme_iov_md": false 00:12:26.267 }, 00:12:26.267 "memory_domains": [ 00:12:26.267 { 00:12:26.267 "dma_device_id": "system", 00:12:26.267 "dma_device_type": 1 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.267 "dma_device_type": 2 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": "system", 00:12:26.267 "dma_device_type": 1 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.267 "dma_device_type": 2 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": "system", 00:12:26.267 "dma_device_type": 1 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.267 "dma_device_type": 2 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": "system", 00:12:26.267 "dma_device_type": 1 00:12:26.267 }, 00:12:26.267 { 00:12:26.267 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:26.267 "dma_device_type": 2 00:12:26.267 } 00:12:26.267 ], 00:12:26.267 "driver_specific": { 00:12:26.267 "raid": { 00:12:26.267 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:26.267 "strip_size_kb": 64, 00:12:26.267 "state": "online", 00:12:26.267 "raid_level": "concat", 00:12:26.267 "superblock": true, 00:12:26.267 "num_base_bdevs": 4, 00:12:26.267 "num_base_bdevs_discovered": 4, 00:12:26.268 "num_base_bdevs_operational": 4, 00:12:26.268 "base_bdevs_list": [ 00:12:26.268 { 00:12:26.268 "name": "pt1", 00:12:26.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.268 "is_configured": true, 00:12:26.268 "data_offset": 2048, 00:12:26.268 "data_size": 63488 00:12:26.268 }, 00:12:26.268 { 00:12:26.268 "name": "pt2", 00:12:26.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.268 "is_configured": true, 00:12:26.268 "data_offset": 2048, 00:12:26.268 "data_size": 63488 00:12:26.268 }, 00:12:26.268 { 00:12:26.268 "name": "pt3", 00:12:26.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.268 "is_configured": true, 00:12:26.268 "data_offset": 2048, 00:12:26.268 "data_size": 63488 00:12:26.268 }, 00:12:26.268 { 00:12:26.268 "name": "pt4", 00:12:26.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.268 "is_configured": true, 00:12:26.268 "data_offset": 2048, 00:12:26.268 "data_size": 63488 00:12:26.268 } 00:12:26.268 ] 00:12:26.268 } 00:12:26.268 } 00:12:26.268 }' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:26.268 pt2 00:12:26.268 pt3 00:12:26.268 pt4' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.268 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.527 03:23:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.527 03:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.527 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.528 [2024-11-05 03:23:40.064880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=36578d8a-ca45-41f6-bcdd-cc26e9700df1 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 36578d8a-ca45-41f6-bcdd-cc26e9700df1 ']' 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.528 [2024-11-05 03:23:40.112533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.528 [2024-11-05 03:23:40.112568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.528 [2024-11-05 03:23:40.112666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.528 [2024-11-05 03:23:40.112776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.528 [2024-11-05 03:23:40.112813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.528 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.787 03:23:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.787 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 [2024-11-05 03:23:40.268604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.787 [2024-11-05 03:23:40.271132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.788 [2024-11-05 03:23:40.271408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:26.788 [2024-11-05 03:23:40.271477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:26.788 [2024-11-05 03:23:40.271549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.788 [2024-11-05 03:23:40.271622] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.788 [2024-11-05 03:23:40.271655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:26.788 [2024-11-05 03:23:40.271685] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:26.788 [2024-11-05 03:23:40.271729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.788 [2024-11-05 03:23:40.271744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:26.788 request: 00:12:26.788 { 00:12:26.788 "name": "raid_bdev1", 00:12:26.788 "raid_level": "concat", 00:12:26.788 "base_bdevs": [ 00:12:26.788 "malloc1", 00:12:26.788 "malloc2", 00:12:26.788 "malloc3", 00:12:26.788 "malloc4" 00:12:26.788 ], 00:12:26.788 "strip_size_kb": 64, 00:12:26.788 "superblock": false, 00:12:26.788 "method": "bdev_raid_create", 00:12:26.788 "req_id": 1 00:12:26.788 } 00:12:26.788 Got JSON-RPC error response 00:12:26.788 response: 00:12:26.788 { 00:12:26.788 "code": -17, 00:12:26.788 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.788 } 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.788 [2024-11-05 03:23:40.336612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.788 [2024-11-05 03:23:40.336894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.788 [2024-11-05 03:23:40.336962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.788 [2024-11-05 03:23:40.337206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.788 [2024-11-05 03:23:40.340268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.788 [2024-11-05 03:23:40.340515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.788 [2024-11-05 03:23:40.340634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.788 [2024-11-05 03:23:40.340715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.788 pt1 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.788 "name": "raid_bdev1", 00:12:26.788 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:26.788 "strip_size_kb": 64, 00:12:26.788 "state": "configuring", 00:12:26.788 "raid_level": "concat", 00:12:26.788 "superblock": true, 00:12:26.788 "num_base_bdevs": 4, 00:12:26.788 "num_base_bdevs_discovered": 1, 00:12:26.788 "num_base_bdevs_operational": 4, 00:12:26.788 "base_bdevs_list": [ 00:12:26.788 { 00:12:26.788 "name": "pt1", 00:12:26.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.788 "is_configured": true, 00:12:26.788 "data_offset": 2048, 00:12:26.788 "data_size": 63488 00:12:26.788 }, 00:12:26.788 { 00:12:26.788 "name": null, 00:12:26.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.788 "is_configured": false, 00:12:26.788 "data_offset": 2048, 00:12:26.788 "data_size": 63488 00:12:26.788 }, 00:12:26.788 { 00:12:26.788 "name": null, 00:12:26.788 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.788 "is_configured": false, 00:12:26.788 "data_offset": 2048, 00:12:26.788 "data_size": 63488 00:12:26.788 }, 00:12:26.788 { 00:12:26.788 "name": null, 00:12:26.788 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.788 "is_configured": false, 00:12:26.788 "data_offset": 2048, 00:12:26.788 "data_size": 63488 00:12:26.788 } 00:12:26.788 ] 00:12:26.788 }' 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.788 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.356 [2024-11-05 03:23:40.884913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.356 [2024-11-05 03:23:40.885176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.356 [2024-11-05 03:23:40.885214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:27.356 [2024-11-05 03:23:40.885233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.356 [2024-11-05 03:23:40.885833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.356 [2024-11-05 03:23:40.885899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.356 [2024-11-05 03:23:40.886038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.356 [2024-11-05 03:23:40.886073] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.356 pt2 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.356 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.356 [2024-11-05 03:23:40.892949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.357 03:23:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.357 "name": "raid_bdev1", 00:12:27.357 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:27.357 "strip_size_kb": 64, 00:12:27.357 "state": "configuring", 00:12:27.357 "raid_level": "concat", 00:12:27.357 "superblock": true, 00:12:27.357 "num_base_bdevs": 4, 00:12:27.357 "num_base_bdevs_discovered": 1, 00:12:27.357 "num_base_bdevs_operational": 4, 00:12:27.357 "base_bdevs_list": [ 00:12:27.357 { 00:12:27.357 "name": "pt1", 00:12:27.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.357 "is_configured": true, 00:12:27.357 "data_offset": 2048, 00:12:27.357 "data_size": 63488 00:12:27.357 }, 00:12:27.357 { 00:12:27.357 "name": null, 00:12:27.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.357 "is_configured": false, 00:12:27.357 "data_offset": 0, 00:12:27.357 "data_size": 63488 00:12:27.357 }, 00:12:27.357 { 00:12:27.357 "name": null, 00:12:27.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.357 "is_configured": false, 00:12:27.357 "data_offset": 2048, 00:12:27.357 "data_size": 63488 00:12:27.357 }, 00:12:27.357 { 00:12:27.357 "name": null, 00:12:27.357 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.357 "is_configured": false, 00:12:27.357 "data_offset": 2048, 00:12:27.357 "data_size": 63488 00:12:27.357 } 00:12:27.357 ] 00:12:27.357 }' 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.357 03:23:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.924 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.925 [2024-11-05 03:23:41.425125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.925 [2024-11-05 03:23:41.425422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.925 [2024-11-05 03:23:41.425468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:27.925 [2024-11-05 03:23:41.425485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.925 [2024-11-05 03:23:41.426064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.925 [2024-11-05 03:23:41.426087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.925 [2024-11-05 03:23:41.426200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.925 [2024-11-05 03:23:41.426229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.925 pt2 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.925 [2024-11-05 03:23:41.437091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.925 [2024-11-05 03:23:41.437160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.925 [2024-11-05 03:23:41.437190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:27.925 [2024-11-05 03:23:41.437205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.925 [2024-11-05 03:23:41.437795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.925 [2024-11-05 03:23:41.437845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.925 [2024-11-05 03:23:41.437967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:27.925 [2024-11-05 03:23:41.438011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.925 pt3 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.925 [2024-11-05 03:23:41.445166] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:27.925 [2024-11-05 03:23:41.445258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.925 [2024-11-05 03:23:41.445294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:27.925 [2024-11-05 03:23:41.445327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.925 [2024-11-05 03:23:41.445875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.925 [2024-11-05 03:23:41.445933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:27.925 [2024-11-05 03:23:41.446021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:27.925 [2024-11-05 03:23:41.446049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:27.925 [2024-11-05 03:23:41.446233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.925 [2024-11-05 03:23:41.446248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:27.925 [2024-11-05 03:23:41.446617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:27.925 [2024-11-05 03:23:41.446980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.925 [2024-11-05 03:23:41.447008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.925 [2024-11-05 03:23:41.447169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.925 pt4 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.925 "name": "raid_bdev1", 00:12:27.925 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:27.925 "strip_size_kb": 64, 00:12:27.925 "state": "online", 00:12:27.925 "raid_level": "concat", 00:12:27.925 
"superblock": true, 00:12:27.925 "num_base_bdevs": 4, 00:12:27.925 "num_base_bdevs_discovered": 4, 00:12:27.925 "num_base_bdevs_operational": 4, 00:12:27.925 "base_bdevs_list": [ 00:12:27.925 { 00:12:27.925 "name": "pt1", 00:12:27.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.925 "is_configured": true, 00:12:27.925 "data_offset": 2048, 00:12:27.925 "data_size": 63488 00:12:27.925 }, 00:12:27.925 { 00:12:27.925 "name": "pt2", 00:12:27.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.925 "is_configured": true, 00:12:27.925 "data_offset": 2048, 00:12:27.925 "data_size": 63488 00:12:27.925 }, 00:12:27.925 { 00:12:27.925 "name": "pt3", 00:12:27.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.925 "is_configured": true, 00:12:27.925 "data_offset": 2048, 00:12:27.925 "data_size": 63488 00:12:27.925 }, 00:12:27.925 { 00:12:27.925 "name": "pt4", 00:12:27.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.925 "is_configured": true, 00:12:27.925 "data_offset": 2048, 00:12:27.925 "data_size": 63488 00:12:27.925 } 00:12:27.925 ] 00:12:27.925 }' 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.925 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.548 03:23:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.548 03:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.548 [2024-11-05 03:23:42.001752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.548 "name": "raid_bdev1", 00:12:28.548 "aliases": [ 00:12:28.548 "36578d8a-ca45-41f6-bcdd-cc26e9700df1" 00:12:28.548 ], 00:12:28.548 "product_name": "Raid Volume", 00:12:28.548 "block_size": 512, 00:12:28.548 "num_blocks": 253952, 00:12:28.548 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:28.548 "assigned_rate_limits": { 00:12:28.548 "rw_ios_per_sec": 0, 00:12:28.548 "rw_mbytes_per_sec": 0, 00:12:28.548 "r_mbytes_per_sec": 0, 00:12:28.548 "w_mbytes_per_sec": 0 00:12:28.548 }, 00:12:28.548 "claimed": false, 00:12:28.548 "zoned": false, 00:12:28.548 "supported_io_types": { 00:12:28.548 "read": true, 00:12:28.548 "write": true, 00:12:28.548 "unmap": true, 00:12:28.548 "flush": true, 00:12:28.548 "reset": true, 00:12:28.548 "nvme_admin": false, 00:12:28.548 "nvme_io": false, 00:12:28.548 "nvme_io_md": false, 00:12:28.548 "write_zeroes": true, 00:12:28.548 "zcopy": false, 00:12:28.548 "get_zone_info": false, 00:12:28.548 "zone_management": false, 00:12:28.548 "zone_append": false, 00:12:28.548 "compare": false, 00:12:28.548 "compare_and_write": false, 00:12:28.548 "abort": false, 00:12:28.548 "seek_hole": false, 00:12:28.548 "seek_data": false, 00:12:28.548 "copy": false, 00:12:28.548 "nvme_iov_md": false 00:12:28.548 }, 00:12:28.548 
"memory_domains": [ 00:12:28.548 { 00:12:28.548 "dma_device_id": "system", 00:12:28.548 "dma_device_type": 1 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.548 "dma_device_type": 2 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "system", 00:12:28.548 "dma_device_type": 1 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.548 "dma_device_type": 2 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "system", 00:12:28.548 "dma_device_type": 1 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.548 "dma_device_type": 2 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "system", 00:12:28.548 "dma_device_type": 1 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.548 "dma_device_type": 2 00:12:28.548 } 00:12:28.548 ], 00:12:28.548 "driver_specific": { 00:12:28.548 "raid": { 00:12:28.548 "uuid": "36578d8a-ca45-41f6-bcdd-cc26e9700df1", 00:12:28.548 "strip_size_kb": 64, 00:12:28.548 "state": "online", 00:12:28.548 "raid_level": "concat", 00:12:28.548 "superblock": true, 00:12:28.548 "num_base_bdevs": 4, 00:12:28.548 "num_base_bdevs_discovered": 4, 00:12:28.548 "num_base_bdevs_operational": 4, 00:12:28.548 "base_bdevs_list": [ 00:12:28.548 { 00:12:28.548 "name": "pt1", 00:12:28.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.548 "is_configured": true, 00:12:28.548 "data_offset": 2048, 00:12:28.548 "data_size": 63488 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "name": "pt2", 00:12:28.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.548 "is_configured": true, 00:12:28.548 "data_offset": 2048, 00:12:28.548 "data_size": 63488 00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "name": "pt3", 00:12:28.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.548 "is_configured": true, 00:12:28.548 "data_offset": 2048, 00:12:28.548 "data_size": 63488 
00:12:28.548 }, 00:12:28.548 { 00:12:28.548 "name": "pt4", 00:12:28.548 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.548 "is_configured": true, 00:12:28.548 "data_offset": 2048, 00:12:28.548 "data_size": 63488 00:12:28.548 } 00:12:28.548 ] 00:12:28.548 } 00:12:28.548 } 00:12:28.548 }' 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:28.548 pt2 00:12:28.548 pt3 00:12:28.548 pt4' 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.548 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.807 [2024-11-05 03:23:42.377837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 36578d8a-ca45-41f6-bcdd-cc26e9700df1 '!=' 36578d8a-ca45-41f6-bcdd-cc26e9700df1 ']' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72509 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72509 ']' 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72509 00:12:28.807 03:23:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:12:28.808 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:28.808 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72509 00:12:29.067 killing process with pid 72509 00:12:29.067 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:29.067 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:29.067 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72509' 00:12:29.067 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72509 00:12:29.067 [2024-11-05 03:23:42.460886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.067 03:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72509 00:12:29.067 [2024-11-05 03:23:42.461036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.067 [2024-11-05 03:23:42.461199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.067 [2024-11-05 03:23:42.461220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:29.326 [2024-11-05 03:23:42.815060] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.265 03:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:30.265 00:12:30.265 real 0m6.086s 00:12:30.265 user 0m9.212s 00:12:30.265 sys 0m0.913s 00:12:30.265 03:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:30.265 03:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.265 ************************************ 00:12:30.265 END TEST raid_superblock_test 
00:12:30.265 ************************************ 00:12:30.265 03:23:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:30.265 03:23:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:30.265 03:23:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:30.265 03:23:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.524 ************************************ 00:12:30.524 START TEST raid_read_error_test 00:12:30.524 ************************************ 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UnYiUu3X3a 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72775 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72775 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 72775 ']' 00:12:30.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:30.524 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.525 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:30.525 03:23:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.525 [2024-11-05 03:23:44.013672] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:30.525 [2024-11-05 03:23:44.013882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72775 ] 00:12:30.783 [2024-11-05 03:23:44.196147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.783 [2024-11-05 03:23:44.328115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.042 [2024-11-05 03:23:44.538299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.042 [2024-11-05 03:23:44.538345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.611 03:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:31.611 03:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:31.611 03:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.611 03:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:31.611 03:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 BaseBdev1_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 true 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 [2024-11-05 03:23:45.050671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:31.611 [2024-11-05 03:23:45.050741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.611 [2024-11-05 03:23:45.050779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:31.611 [2024-11-05 03:23:45.050797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.611 [2024-11-05 03:23:45.053962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.611 [2024-11-05 03:23:45.054014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:31.611 BaseBdev1 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 BaseBdev2_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 true 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 [2024-11-05 03:23:45.112595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:31.611 [2024-11-05 03:23:45.112663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.611 [2024-11-05 03:23:45.112689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:31.611 [2024-11-05 03:23:45.112706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.611 [2024-11-05 03:23:45.115927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.611 [2024-11-05 03:23:45.115993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:31.611 BaseBdev2 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 BaseBdev3_malloc 00:12:31.611 03:23:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 true 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 [2024-11-05 03:23:45.191910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:31.611 [2024-11-05 03:23:45.191980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.611 [2024-11-05 03:23:45.192009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:31.611 [2024-11-05 03:23:45.192027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.611 [2024-11-05 03:23:45.194976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.611 [2024-11-05 03:23:45.195045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:31.611 BaseBdev3 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 BaseBdev4_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.611 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.871 true 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.871 [2024-11-05 03:23:45.257051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:31.871 [2024-11-05 03:23:45.257132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.871 [2024-11-05 03:23:45.257160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:31.871 [2024-11-05 03:23:45.257177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.871 [2024-11-05 03:23:45.260166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.871 [2024-11-05 03:23:45.260220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:31.871 BaseBdev4 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.871 [2024-11-05 03:23:45.269148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.871 [2024-11-05 03:23:45.271969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.871 [2024-11-05 03:23:45.272072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.871 [2024-11-05 03:23:45.272166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.871 [2024-11-05 03:23:45.272480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:31.871 [2024-11-05 03:23:45.272502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:31.871 [2024-11-05 03:23:45.272943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.871 [2024-11-05 03:23:45.273400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:31.871 [2024-11-05 03:23:45.273430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:31.871 [2024-11-05 03:23:45.273681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:31.871 03:23:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.871 "name": "raid_bdev1", 00:12:31.871 "uuid": "f763950b-d062-4c2d-9a76-3223ef46c35e", 00:12:31.871 "strip_size_kb": 64, 00:12:31.871 "state": "online", 00:12:31.871 "raid_level": "concat", 00:12:31.871 "superblock": true, 00:12:31.871 "num_base_bdevs": 4, 00:12:31.871 "num_base_bdevs_discovered": 4, 00:12:31.871 "num_base_bdevs_operational": 4, 00:12:31.871 "base_bdevs_list": [ 
00:12:31.871 { 00:12:31.871 "name": "BaseBdev1", 00:12:31.871 "uuid": "571271d3-19e3-5e01-886b-6ff027b83a42", 00:12:31.871 "is_configured": true, 00:12:31.871 "data_offset": 2048, 00:12:31.871 "data_size": 63488 00:12:31.871 }, 00:12:31.871 { 00:12:31.871 "name": "BaseBdev2", 00:12:31.871 "uuid": "50d6d386-91c6-5275-88a2-883d8bc0561b", 00:12:31.871 "is_configured": true, 00:12:31.871 "data_offset": 2048, 00:12:31.871 "data_size": 63488 00:12:31.871 }, 00:12:31.871 { 00:12:31.871 "name": "BaseBdev3", 00:12:31.871 "uuid": "847bb861-a79c-56aa-80d7-3190515ddbc2", 00:12:31.871 "is_configured": true, 00:12:31.871 "data_offset": 2048, 00:12:31.871 "data_size": 63488 00:12:31.871 }, 00:12:31.871 { 00:12:31.871 "name": "BaseBdev4", 00:12:31.871 "uuid": "29570cda-f236-57e0-88f3-84a6ef4f96ca", 00:12:31.871 "is_configured": true, 00:12:31.871 "data_offset": 2048, 00:12:31.871 "data_size": 63488 00:12:31.871 } 00:12:31.871 ] 00:12:31.871 }' 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.871 03:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.459 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:32.459 03:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:32.459 [2024-11-05 03:23:45.979418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.421 03:23:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.421 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.421 03:23:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.421 "name": "raid_bdev1", 00:12:33.421 "uuid": "f763950b-d062-4c2d-9a76-3223ef46c35e", 00:12:33.422 "strip_size_kb": 64, 00:12:33.422 "state": "online", 00:12:33.422 "raid_level": "concat", 00:12:33.422 "superblock": true, 00:12:33.422 "num_base_bdevs": 4, 00:12:33.422 "num_base_bdevs_discovered": 4, 00:12:33.422 "num_base_bdevs_operational": 4, 00:12:33.422 "base_bdevs_list": [ 00:12:33.422 { 00:12:33.422 "name": "BaseBdev1", 00:12:33.422 "uuid": "571271d3-19e3-5e01-886b-6ff027b83a42", 00:12:33.422 "is_configured": true, 00:12:33.422 "data_offset": 2048, 00:12:33.422 "data_size": 63488 00:12:33.422 }, 00:12:33.422 { 00:12:33.422 "name": "BaseBdev2", 00:12:33.422 "uuid": "50d6d386-91c6-5275-88a2-883d8bc0561b", 00:12:33.422 "is_configured": true, 00:12:33.422 "data_offset": 2048, 00:12:33.422 "data_size": 63488 00:12:33.422 }, 00:12:33.422 { 00:12:33.422 "name": "BaseBdev3", 00:12:33.422 "uuid": "847bb861-a79c-56aa-80d7-3190515ddbc2", 00:12:33.422 "is_configured": true, 00:12:33.422 "data_offset": 2048, 00:12:33.422 "data_size": 63488 00:12:33.422 }, 00:12:33.422 { 00:12:33.422 "name": "BaseBdev4", 00:12:33.422 "uuid": "29570cda-f236-57e0-88f3-84a6ef4f96ca", 00:12:33.422 "is_configured": true, 00:12:33.422 "data_offset": 2048, 00:12:33.422 "data_size": 63488 00:12:33.422 } 00:12:33.422 ] 00:12:33.422 }' 00:12:33.422 03:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.422 03:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.002 [2024-11-05 03:23:47.370623] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.002 [2024-11-05 03:23:47.370900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.002 [2024-11-05 03:23:47.374442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.002 [2024-11-05 03:23:47.374649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.002 [2024-11-05 03:23:47.374723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.002 [2024-11-05 03:23:47.374746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:34.002 { 00:12:34.002 "results": [ 00:12:34.002 { 00:12:34.002 "job": "raid_bdev1", 00:12:34.002 "core_mask": "0x1", 00:12:34.002 "workload": "randrw", 00:12:34.002 "percentage": 50, 00:12:34.002 "status": "finished", 00:12:34.002 "queue_depth": 1, 00:12:34.002 "io_size": 131072, 00:12:34.002 "runtime": 1.389039, 00:12:34.002 "iops": 10528.142118399843, 00:12:34.002 "mibps": 1316.0177647999803, 00:12:34.002 "io_failed": 1, 00:12:34.002 "io_timeout": 0, 00:12:34.002 "avg_latency_us": 132.6879786169386, 00:12:34.002 "min_latency_us": 37.236363636363635, 00:12:34.002 "max_latency_us": 1891.6072727272726 00:12:34.002 } 00:12:34.002 ], 00:12:34.002 "core_count": 1 00:12:34.002 } 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72775 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 72775 ']' 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 72775 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72775 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:34.002 killing process with pid 72775 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72775' 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 72775 00:12:34.002 [2024-11-05 03:23:47.412813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.002 03:23:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 72775 00:12:34.261 [2024-11-05 03:23:47.707411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UnYiUu3X3a 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:35.198 00:12:35.198 real 0m4.923s 00:12:35.198 user 0m6.059s 00:12:35.198 sys 0m0.635s 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:35.198 03:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.198 ************************************ 00:12:35.198 END TEST raid_read_error_test 00:12:35.198 ************************************ 00:12:35.458 03:23:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:35.458 03:23:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:35.458 03:23:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:35.458 03:23:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.458 ************************************ 00:12:35.458 START TEST raid_write_error_test 00:12:35.458 ************************************ 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j8ECWOpN3t 00:12:35.458 03:23:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72921 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72921 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 72921 ']' 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:35.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:35.458 03:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.458 [2024-11-05 03:23:49.007156] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:35.458 [2024-11-05 03:23:49.007363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72921 ] 00:12:35.718 [2024-11-05 03:23:49.198390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.977 [2024-11-05 03:23:49.358357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.977 [2024-11-05 03:23:49.587154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.977 [2024-11-05 03:23:49.587232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 BaseBdev1_malloc 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 true 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 [2024-11-05 03:23:50.094053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:36.562 [2024-11-05 03:23:50.094125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.562 [2024-11-05 03:23:50.094155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:36.562 [2024-11-05 03:23:50.094173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.562 [2024-11-05 03:23:50.097006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.562 [2024-11-05 03:23:50.097056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.562 BaseBdev1 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 BaseBdev2_malloc 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:36.562 03:23:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 true 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 [2024-11-05 03:23:50.158808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:36.562 [2024-11-05 03:23:50.158879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.562 [2024-11-05 03:23:50.158904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:36.562 [2024-11-05 03:23:50.158921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.562 [2024-11-05 03:23:50.161931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.562 [2024-11-05 03:23:50.162147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.562 BaseBdev2 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.562 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:36.821 BaseBdev3_malloc 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 true 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 [2024-11-05 03:23:50.227079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:36.822 [2024-11-05 03:23:50.227290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.822 [2024-11-05 03:23:50.227377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:36.822 [2024-11-05 03:23:50.227511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.822 [2024-11-05 03:23:50.230385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.822 [2024-11-05 03:23:50.230548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:36.822 BaseBdev3 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 BaseBdev4_malloc 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 true 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 [2024-11-05 03:23:50.296383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:36.822 [2024-11-05 03:23:50.296453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.822 [2024-11-05 03:23:50.296487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.822 [2024-11-05 03:23:50.296506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.822 [2024-11-05 03:23:50.299360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.822 [2024-11-05 03:23:50.299444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:36.822 BaseBdev4 
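The trace above repeats the same three-step stack for each of the four base bdevs: a malloc bdev, wrapped by an error-injection bdev, re-exposed under its plain name via a passthru bdev. A minimal standalone sketch of that loop, which only prints the RPC invocations rather than executing them (the command names and arguments are taken verbatim from the xtrace; actually running them requires `rpc_cmd` against a live SPDK app):

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev setup seen in the xtrace above.
# Each BaseBdevN is: malloc bdev -> error injector -> passthru alias.
base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)
for bdev in "${base_bdevs[@]}"; do
    # In the real test these three run through rpc_cmd; here we only echo them.
    echo "bdev_malloc_create 32 512 -b ${bdev}_malloc"
    echo "bdev_error_create ${bdev}_malloc"
    echo "bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}"
done
```

The passthru layer is what lets `bdev_error_inject_error EE_BaseBdev1_malloc write failure` (seen later in the log) fail writes on one member while the raid still sees an ordinary bdev named BaseBdev1.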
00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 [2024-11-05 03:23:50.308533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.822 [2024-11-05 03:23:50.311092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.822 [2024-11-05 03:23:50.311205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.822 [2024-11-05 03:23:50.311308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.822 [2024-11-05 03:23:50.311611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:36.822 [2024-11-05 03:23:50.311663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:36.822 [2024-11-05 03:23:50.312003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:36.822 [2024-11-05 03:23:50.312206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:36.822 [2024-11-05 03:23:50.312223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:36.822 [2024-11-05 03:23:50.312477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.822 "name": "raid_bdev1", 00:12:36.822 "uuid": "850acf0c-18b3-4080-b361-802f1a20e20d", 00:12:36.822 "strip_size_kb": 64, 00:12:36.822 "state": "online", 00:12:36.822 "raid_level": "concat", 00:12:36.822 "superblock": true, 00:12:36.822 "num_base_bdevs": 4, 00:12:36.822 "num_base_bdevs_discovered": 4, 00:12:36.822 
"num_base_bdevs_operational": 4, 00:12:36.822 "base_bdevs_list": [ 00:12:36.822 { 00:12:36.822 "name": "BaseBdev1", 00:12:36.822 "uuid": "d9c44717-89eb-52c6-9813-1aa5b14ef041", 00:12:36.822 "is_configured": true, 00:12:36.822 "data_offset": 2048, 00:12:36.822 "data_size": 63488 00:12:36.822 }, 00:12:36.822 { 00:12:36.822 "name": "BaseBdev2", 00:12:36.822 "uuid": "66d0fe89-0cda-5c88-b84d-cba0a016e84f", 00:12:36.822 "is_configured": true, 00:12:36.822 "data_offset": 2048, 00:12:36.822 "data_size": 63488 00:12:36.822 }, 00:12:36.822 { 00:12:36.822 "name": "BaseBdev3", 00:12:36.822 "uuid": "0283dabe-1adf-5d09-9fc9-559fb59cc8e1", 00:12:36.822 "is_configured": true, 00:12:36.822 "data_offset": 2048, 00:12:36.822 "data_size": 63488 00:12:36.822 }, 00:12:36.822 { 00:12:36.822 "name": "BaseBdev4", 00:12:36.822 "uuid": "8631f407-62c3-5799-949b-4dea8f711fbf", 00:12:36.822 "is_configured": true, 00:12:36.822 "data_offset": 2048, 00:12:36.822 "data_size": 63488 00:12:36.822 } 00:12:36.822 ] 00:12:36.822 }' 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.822 03:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.390 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:37.390 03:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.391 [2024-11-05 03:23:50.970175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.326 03:23:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.326 "name": "raid_bdev1", 00:12:38.326 "uuid": "850acf0c-18b3-4080-b361-802f1a20e20d", 00:12:38.326 "strip_size_kb": 64, 00:12:38.326 "state": "online", 00:12:38.326 "raid_level": "concat", 00:12:38.326 "superblock": true, 00:12:38.326 "num_base_bdevs": 4, 00:12:38.326 "num_base_bdevs_discovered": 4, 00:12:38.326 "num_base_bdevs_operational": 4, 00:12:38.326 "base_bdevs_list": [ 00:12:38.326 { 00:12:38.326 "name": "BaseBdev1", 00:12:38.326 "uuid": "d9c44717-89eb-52c6-9813-1aa5b14ef041", 00:12:38.326 "is_configured": true, 00:12:38.326 "data_offset": 2048, 00:12:38.326 "data_size": 63488 00:12:38.326 }, 00:12:38.326 { 00:12:38.326 "name": "BaseBdev2", 00:12:38.326 "uuid": "66d0fe89-0cda-5c88-b84d-cba0a016e84f", 00:12:38.326 "is_configured": true, 00:12:38.326 "data_offset": 2048, 00:12:38.326 "data_size": 63488 00:12:38.326 }, 00:12:38.326 { 00:12:38.326 "name": "BaseBdev3", 00:12:38.326 "uuid": "0283dabe-1adf-5d09-9fc9-559fb59cc8e1", 00:12:38.326 "is_configured": true, 00:12:38.326 "data_offset": 2048, 00:12:38.326 "data_size": 63488 00:12:38.326 }, 00:12:38.326 { 00:12:38.326 "name": "BaseBdev4", 00:12:38.326 "uuid": "8631f407-62c3-5799-949b-4dea8f711fbf", 00:12:38.326 "is_configured": true, 00:12:38.326 "data_offset": 2048, 00:12:38.326 "data_size": 63488 00:12:38.326 } 00:12:38.326 ] 00:12:38.326 }' 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.326 03:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.955 [2024-11-05 03:23:52.373678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.955 [2024-11-05 03:23:52.373885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.955 [2024-11-05 03:23:52.377334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.955 [2024-11-05 03:23:52.377571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.955 [2024-11-05 03:23:52.377647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.955 [2024-11-05 03:23:52.377671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72921 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 72921 ']' 00:12:38.955 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 72921 00:12:38.955 { 00:12:38.955 "results": [ 00:12:38.955 { 00:12:38.955 "job": "raid_bdev1", 00:12:38.955 "core_mask": "0x1", 00:12:38.955 "workload": "randrw", 00:12:38.955 "percentage": 50, 00:12:38.955 "status": "finished", 00:12:38.955 "queue_depth": 1, 00:12:38.955 "io_size": 131072, 00:12:38.955 "runtime": 1.401024, 00:12:38.955 "iops": 11146.13311406514, 00:12:38.955 "mibps": 1393.2666392581425, 00:12:38.955 "io_failed": 1, 00:12:38.955 "io_timeout": 0, 00:12:38.955 "avg_latency_us": 125.32090926554396, 00:12:38.956 "min_latency_us": 36.07272727272727, 00:12:38.956 "max_latency_us": 1832.0290909090909 00:12:38.956 } 00:12:38.956 ], 00:12:38.956 "core_count": 1 00:12:38.956 } 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72921 00:12:38.956 killing process with pid 72921 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72921' 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 72921 00:12:38.956 [2024-11-05 03:23:52.411292] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.956 03:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 72921 00:12:39.213 [2024-11-05 03:23:52.664777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j8ECWOpN3t 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:40.148 00:12:40.148 real 0m4.772s 00:12:40.148 user 0m5.955s 
00:12:40.148 sys 0m0.610s 00:12:40.148 ************************************ 00:12:40.148 END TEST raid_write_error_test 00:12:40.148 ************************************ 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.148 03:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.148 03:23:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:40.148 03:23:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:40.148 03:23:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:40.148 03:23:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.148 03:23:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.148 ************************************ 00:12:40.148 START TEST raid_state_function_test 00:12:40.148 ************************************ 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.148 
03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:40.148 03:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:40.148 Process raid pid: 73070 00:12:40.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73070 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73070' 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73070 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73070 ']' 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.148 03:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.406 [2024-11-05 03:23:53.804951] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:40.406 [2024-11-05 03:23:53.805412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.406 [2024-11-05 03:23:53.974141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.665 [2024-11-05 03:23:54.089400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.665 [2024-11-05 03:23:54.272978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.665 [2024-11-05 03:23:54.273016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.232 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.232 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:41.232 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.232 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.232 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.232 [2024-11-05 03:23:54.819190] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.232 [2024-11-05 03:23:54.819263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.232 [2024-11-05 03:23:54.819279] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.232 [2024-11-05 03:23:54.819294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.232 [2024-11-05 03:23:54.819302] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:41.232 [2024-11-05 03:23:54.819368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.232 [2024-11-05 03:23:54.819381] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:41.233 [2024-11-05 03:23:54.819395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.233 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.492 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.492 "name": "Existed_Raid", 00:12:41.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.492 "strip_size_kb": 0, 00:12:41.492 "state": "configuring", 00:12:41.492 "raid_level": "raid1", 00:12:41.492 "superblock": false, 00:12:41.492 "num_base_bdevs": 4, 00:12:41.492 "num_base_bdevs_discovered": 0, 00:12:41.492 "num_base_bdevs_operational": 4, 00:12:41.492 "base_bdevs_list": [ 00:12:41.492 { 00:12:41.492 "name": "BaseBdev1", 00:12:41.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.492 "is_configured": false, 00:12:41.492 "data_offset": 0, 00:12:41.492 "data_size": 0 00:12:41.492 }, 00:12:41.492 { 00:12:41.492 "name": "BaseBdev2", 00:12:41.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.492 "is_configured": false, 00:12:41.492 "data_offset": 0, 00:12:41.492 "data_size": 0 00:12:41.492 }, 00:12:41.492 { 00:12:41.492 "name": "BaseBdev3", 00:12:41.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.492 "is_configured": false, 00:12:41.492 "data_offset": 0, 00:12:41.492 "data_size": 0 00:12:41.492 }, 00:12:41.492 { 00:12:41.492 "name": "BaseBdev4", 00:12:41.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.492 "is_configured": false, 00:12:41.492 "data_offset": 0, 00:12:41.492 "data_size": 0 00:12:41.492 } 00:12:41.492 ] 00:12:41.492 }' 00:12:41.492 03:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.492 03:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.751 [2024-11-05 03:23:55.331263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:41.751 [2024-11-05 03:23:55.331300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.751 [2024-11-05 03:23:55.343289] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.751 [2024-11-05 03:23:55.343386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.751 [2024-11-05 03:23:55.343401] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.751 [2024-11-05 03:23:55.343417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.751 [2024-11-05 03:23:55.343427] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:41.751 [2024-11-05 03:23:55.343454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.751 [2024-11-05 03:23:55.343464] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:41.751 [2024-11-05 03:23:55.343477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.751 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.751 [2024-11-05 03:23:55.386605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.751 BaseBdev1 00:12:42.010 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.010 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:42.010 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:42.010 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:42.010 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.011 [ 00:12:42.011 { 00:12:42.011 "name": "BaseBdev1", 00:12:42.011 "aliases": [ 00:12:42.011 "37d2c638-b726-4c14-af07-3f2733eba8a2" 00:12:42.011 ], 00:12:42.011 "product_name": "Malloc disk", 00:12:42.011 "block_size": 512, 00:12:42.011 "num_blocks": 65536, 00:12:42.011 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:42.011 "assigned_rate_limits": { 00:12:42.011 "rw_ios_per_sec": 0, 00:12:42.011 "rw_mbytes_per_sec": 0, 00:12:42.011 "r_mbytes_per_sec": 0, 00:12:42.011 "w_mbytes_per_sec": 0 00:12:42.011 }, 00:12:42.011 "claimed": true, 00:12:42.011 "claim_type": "exclusive_write", 00:12:42.011 "zoned": false, 00:12:42.011 "supported_io_types": { 00:12:42.011 "read": true, 00:12:42.011 "write": true, 00:12:42.011 "unmap": true, 00:12:42.011 "flush": true, 00:12:42.011 "reset": true, 00:12:42.011 "nvme_admin": false, 00:12:42.011 "nvme_io": false, 00:12:42.011 "nvme_io_md": false, 00:12:42.011 "write_zeroes": true, 00:12:42.011 "zcopy": true, 00:12:42.011 "get_zone_info": false, 00:12:42.011 "zone_management": false, 00:12:42.011 "zone_append": false, 00:12:42.011 "compare": false, 00:12:42.011 "compare_and_write": false, 00:12:42.011 "abort": true, 00:12:42.011 "seek_hole": false, 00:12:42.011 "seek_data": false, 00:12:42.011 "copy": true, 00:12:42.011 "nvme_iov_md": false 00:12:42.011 }, 00:12:42.011 "memory_domains": [ 00:12:42.011 { 00:12:42.011 "dma_device_id": "system", 00:12:42.011 "dma_device_type": 1 00:12:42.011 }, 00:12:42.011 { 00:12:42.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.011 "dma_device_type": 2 00:12:42.011 } 00:12:42.011 ], 00:12:42.011 "driver_specific": {} 00:12:42.011 } 00:12:42.011 ] 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.011 "name": "Existed_Raid", 00:12:42.011 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:42.011 "strip_size_kb": 0, 00:12:42.011 "state": "configuring", 00:12:42.011 "raid_level": "raid1", 00:12:42.011 "superblock": false, 00:12:42.011 "num_base_bdevs": 4, 00:12:42.011 "num_base_bdevs_discovered": 1, 00:12:42.011 "num_base_bdevs_operational": 4, 00:12:42.011 "base_bdevs_list": [ 00:12:42.011 { 00:12:42.011 "name": "BaseBdev1", 00:12:42.011 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:42.011 "is_configured": true, 00:12:42.011 "data_offset": 0, 00:12:42.011 "data_size": 65536 00:12:42.011 }, 00:12:42.011 { 00:12:42.011 "name": "BaseBdev2", 00:12:42.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.011 "is_configured": false, 00:12:42.011 "data_offset": 0, 00:12:42.011 "data_size": 0 00:12:42.011 }, 00:12:42.011 { 00:12:42.011 "name": "BaseBdev3", 00:12:42.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.011 "is_configured": false, 00:12:42.011 "data_offset": 0, 00:12:42.011 "data_size": 0 00:12:42.011 }, 00:12:42.011 { 00:12:42.011 "name": "BaseBdev4", 00:12:42.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.011 "is_configured": false, 00:12:42.011 "data_offset": 0, 00:12:42.011 "data_size": 0 00:12:42.011 } 00:12:42.011 ] 00:12:42.011 }' 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.011 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 [2024-11-05 03:23:55.934822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.578 [2024-11-05 03:23:55.934878] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 [2024-11-05 03:23:55.942850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.578 [2024-11-05 03:23:55.945065] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.578 [2024-11-05 03:23:55.945128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.578 [2024-11-05 03:23:55.945142] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.578 [2024-11-05 03:23:55.945157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.578 [2024-11-05 03:23:55.945166] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:42.578 [2024-11-05 03:23:55.945178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.578 03:23:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.578 "name": "Existed_Raid", 00:12:42.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.578 "strip_size_kb": 0, 00:12:42.578 "state": "configuring", 00:12:42.578 "raid_level": "raid1", 00:12:42.578 "superblock": false, 00:12:42.578 "num_base_bdevs": 4, 00:12:42.578 "num_base_bdevs_discovered": 1, 00:12:42.578 
"num_base_bdevs_operational": 4, 00:12:42.578 "base_bdevs_list": [ 00:12:42.578 { 00:12:42.578 "name": "BaseBdev1", 00:12:42.578 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:42.578 "is_configured": true, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 65536 00:12:42.578 }, 00:12:42.578 { 00:12:42.578 "name": "BaseBdev2", 00:12:42.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.578 "is_configured": false, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 0 00:12:42.578 }, 00:12:42.578 { 00:12:42.578 "name": "BaseBdev3", 00:12:42.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.578 "is_configured": false, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 0 00:12:42.578 }, 00:12:42.578 { 00:12:42.578 "name": "BaseBdev4", 00:12:42.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.578 "is_configured": false, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 0 00:12:42.578 } 00:12:42.578 ] 00:12:42.578 }' 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.578 03:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.836 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:42.837 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.837 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.837 [2024-11-05 03:23:56.472174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.095 BaseBdev2 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev2 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.095 [ 00:12:43.095 { 00:12:43.095 "name": "BaseBdev2", 00:12:43.095 "aliases": [ 00:12:43.095 "fe801a42-3cc5-4222-a510-a4c9f5d1cb03" 00:12:43.095 ], 00:12:43.095 "product_name": "Malloc disk", 00:12:43.095 "block_size": 512, 00:12:43.095 "num_blocks": 65536, 00:12:43.095 "uuid": "fe801a42-3cc5-4222-a510-a4c9f5d1cb03", 00:12:43.095 "assigned_rate_limits": { 00:12:43.095 "rw_ios_per_sec": 0, 00:12:43.095 "rw_mbytes_per_sec": 0, 00:12:43.095 "r_mbytes_per_sec": 0, 00:12:43.095 "w_mbytes_per_sec": 0 00:12:43.095 }, 00:12:43.095 "claimed": true, 00:12:43.095 "claim_type": "exclusive_write", 00:12:43.095 "zoned": false, 00:12:43.095 "supported_io_types": { 00:12:43.095 "read": true, 00:12:43.095 "write": true, 00:12:43.095 
"unmap": true, 00:12:43.095 "flush": true, 00:12:43.095 "reset": true, 00:12:43.095 "nvme_admin": false, 00:12:43.095 "nvme_io": false, 00:12:43.095 "nvme_io_md": false, 00:12:43.095 "write_zeroes": true, 00:12:43.095 "zcopy": true, 00:12:43.095 "get_zone_info": false, 00:12:43.095 "zone_management": false, 00:12:43.095 "zone_append": false, 00:12:43.095 "compare": false, 00:12:43.095 "compare_and_write": false, 00:12:43.095 "abort": true, 00:12:43.095 "seek_hole": false, 00:12:43.095 "seek_data": false, 00:12:43.095 "copy": true, 00:12:43.095 "nvme_iov_md": false 00:12:43.095 }, 00:12:43.095 "memory_domains": [ 00:12:43.095 { 00:12:43.095 "dma_device_id": "system", 00:12:43.095 "dma_device_type": 1 00:12:43.095 }, 00:12:43.095 { 00:12:43.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.095 "dma_device_type": 2 00:12:43.095 } 00:12:43.095 ], 00:12:43.095 "driver_specific": {} 00:12:43.095 } 00:12:43.095 ] 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.095 03:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.095 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.096 "name": "Existed_Raid", 00:12:43.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.096 "strip_size_kb": 0, 00:12:43.096 "state": "configuring", 00:12:43.096 "raid_level": "raid1", 00:12:43.096 "superblock": false, 00:12:43.096 "num_base_bdevs": 4, 00:12:43.096 "num_base_bdevs_discovered": 2, 00:12:43.096 "num_base_bdevs_operational": 4, 00:12:43.096 "base_bdevs_list": [ 00:12:43.096 { 00:12:43.096 "name": "BaseBdev1", 00:12:43.096 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:43.096 "is_configured": true, 00:12:43.096 "data_offset": 0, 00:12:43.096 "data_size": 65536 00:12:43.096 }, 00:12:43.096 { 00:12:43.096 "name": "BaseBdev2", 00:12:43.096 "uuid": "fe801a42-3cc5-4222-a510-a4c9f5d1cb03", 00:12:43.096 "is_configured": true, 00:12:43.096 
"data_offset": 0, 00:12:43.096 "data_size": 65536 00:12:43.096 }, 00:12:43.096 { 00:12:43.096 "name": "BaseBdev3", 00:12:43.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.096 "is_configured": false, 00:12:43.096 "data_offset": 0, 00:12:43.096 "data_size": 0 00:12:43.096 }, 00:12:43.096 { 00:12:43.096 "name": "BaseBdev4", 00:12:43.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.096 "is_configured": false, 00:12:43.096 "data_offset": 0, 00:12:43.096 "data_size": 0 00:12:43.096 } 00:12:43.096 ] 00:12:43.096 }' 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.096 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.663 03:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.663 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.663 03:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.663 [2024-11-05 03:23:57.049849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.663 BaseBdev3 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.663 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.663 [ 00:12:43.663 { 00:12:43.663 "name": "BaseBdev3", 00:12:43.663 "aliases": [ 00:12:43.663 "30cb8e9e-a2f3-45e2-b384-5a73740a46fe" 00:12:43.663 ], 00:12:43.663 "product_name": "Malloc disk", 00:12:43.663 "block_size": 512, 00:12:43.663 "num_blocks": 65536, 00:12:43.663 "uuid": "30cb8e9e-a2f3-45e2-b384-5a73740a46fe", 00:12:43.663 "assigned_rate_limits": { 00:12:43.663 "rw_ios_per_sec": 0, 00:12:43.663 "rw_mbytes_per_sec": 0, 00:12:43.663 "r_mbytes_per_sec": 0, 00:12:43.663 "w_mbytes_per_sec": 0 00:12:43.663 }, 00:12:43.663 "claimed": true, 00:12:43.663 "claim_type": "exclusive_write", 00:12:43.663 "zoned": false, 00:12:43.663 "supported_io_types": { 00:12:43.663 "read": true, 00:12:43.663 "write": true, 00:12:43.663 "unmap": true, 00:12:43.663 "flush": true, 00:12:43.663 "reset": true, 00:12:43.663 "nvme_admin": false, 00:12:43.663 "nvme_io": false, 00:12:43.663 "nvme_io_md": false, 00:12:43.663 "write_zeroes": true, 00:12:43.663 "zcopy": true, 00:12:43.663 "get_zone_info": false, 00:12:43.663 "zone_management": false, 00:12:43.663 "zone_append": false, 00:12:43.663 "compare": false, 00:12:43.663 "compare_and_write": false, 00:12:43.664 "abort": true, 
00:12:43.664 "seek_hole": false, 00:12:43.664 "seek_data": false, 00:12:43.664 "copy": true, 00:12:43.664 "nvme_iov_md": false 00:12:43.664 }, 00:12:43.664 "memory_domains": [ 00:12:43.664 { 00:12:43.664 "dma_device_id": "system", 00:12:43.664 "dma_device_type": 1 00:12:43.664 }, 00:12:43.664 { 00:12:43.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.664 "dma_device_type": 2 00:12:43.664 } 00:12:43.664 ], 00:12:43.664 "driver_specific": {} 00:12:43.664 } 00:12:43.664 ] 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.664 03:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.664 "name": "Existed_Raid", 00:12:43.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.664 "strip_size_kb": 0, 00:12:43.664 "state": "configuring", 00:12:43.664 "raid_level": "raid1", 00:12:43.664 "superblock": false, 00:12:43.664 "num_base_bdevs": 4, 00:12:43.664 "num_base_bdevs_discovered": 3, 00:12:43.664 "num_base_bdevs_operational": 4, 00:12:43.664 "base_bdevs_list": [ 00:12:43.664 { 00:12:43.664 "name": "BaseBdev1", 00:12:43.664 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:43.664 "is_configured": true, 00:12:43.664 "data_offset": 0, 00:12:43.664 "data_size": 65536 00:12:43.664 }, 00:12:43.664 { 00:12:43.664 "name": "BaseBdev2", 00:12:43.664 "uuid": "fe801a42-3cc5-4222-a510-a4c9f5d1cb03", 00:12:43.664 "is_configured": true, 00:12:43.664 "data_offset": 0, 00:12:43.664 "data_size": 65536 00:12:43.664 }, 00:12:43.664 { 00:12:43.664 "name": "BaseBdev3", 00:12:43.664 "uuid": "30cb8e9e-a2f3-45e2-b384-5a73740a46fe", 00:12:43.664 "is_configured": true, 00:12:43.664 "data_offset": 0, 00:12:43.664 "data_size": 65536 00:12:43.664 }, 00:12:43.664 { 00:12:43.664 "name": "BaseBdev4", 00:12:43.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.664 "is_configured": false, 00:12:43.664 "data_offset": 
0, 00:12:43.664 "data_size": 0 00:12:43.664 } 00:12:43.664 ] 00:12:43.664 }' 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.664 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.232 [2024-11-05 03:23:57.638254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.232 [2024-11-05 03:23:57.638308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.232 [2024-11-05 03:23:57.638319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:44.232 [2024-11-05 03:23:57.638976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:44.232 [2024-11-05 03:23:57.639226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.232 [2024-11-05 03:23:57.639246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:44.232 [2024-11-05 03:23:57.639608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.232 BaseBdev4 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.232 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.233 [ 00:12:44.233 { 00:12:44.233 "name": "BaseBdev4", 00:12:44.233 "aliases": [ 00:12:44.233 "3a9e341d-0b1e-4e73-b75e-026daa9f7104" 00:12:44.233 ], 00:12:44.233 "product_name": "Malloc disk", 00:12:44.233 "block_size": 512, 00:12:44.233 "num_blocks": 65536, 00:12:44.233 "uuid": "3a9e341d-0b1e-4e73-b75e-026daa9f7104", 00:12:44.233 "assigned_rate_limits": { 00:12:44.233 "rw_ios_per_sec": 0, 00:12:44.233 "rw_mbytes_per_sec": 0, 00:12:44.233 "r_mbytes_per_sec": 0, 00:12:44.233 "w_mbytes_per_sec": 0 00:12:44.233 }, 00:12:44.233 "claimed": true, 00:12:44.233 "claim_type": "exclusive_write", 00:12:44.233 "zoned": false, 00:12:44.233 "supported_io_types": { 00:12:44.233 "read": true, 00:12:44.233 "write": true, 00:12:44.233 "unmap": true, 00:12:44.233 "flush": true, 00:12:44.233 "reset": true, 00:12:44.233 "nvme_admin": false, 00:12:44.233 "nvme_io": 
false, 00:12:44.233 "nvme_io_md": false, 00:12:44.233 "write_zeroes": true, 00:12:44.233 "zcopy": true, 00:12:44.233 "get_zone_info": false, 00:12:44.233 "zone_management": false, 00:12:44.233 "zone_append": false, 00:12:44.233 "compare": false, 00:12:44.233 "compare_and_write": false, 00:12:44.233 "abort": true, 00:12:44.233 "seek_hole": false, 00:12:44.233 "seek_data": false, 00:12:44.233 "copy": true, 00:12:44.233 "nvme_iov_md": false 00:12:44.233 }, 00:12:44.233 "memory_domains": [ 00:12:44.233 { 00:12:44.233 "dma_device_id": "system", 00:12:44.233 "dma_device_type": 1 00:12:44.233 }, 00:12:44.233 { 00:12:44.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.233 "dma_device_type": 2 00:12:44.233 } 00:12:44.233 ], 00:12:44.233 "driver_specific": {} 00:12:44.233 } 00:12:44.233 ] 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.233 "name": "Existed_Raid", 00:12:44.233 "uuid": "9ee00649-d149-44d2-91ea-1c1f37b9bd99", 00:12:44.233 "strip_size_kb": 0, 00:12:44.233 "state": "online", 00:12:44.233 "raid_level": "raid1", 00:12:44.233 "superblock": false, 00:12:44.233 "num_base_bdevs": 4, 00:12:44.233 "num_base_bdevs_discovered": 4, 00:12:44.233 "num_base_bdevs_operational": 4, 00:12:44.233 "base_bdevs_list": [ 00:12:44.233 { 00:12:44.233 "name": "BaseBdev1", 00:12:44.233 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:44.233 "is_configured": true, 00:12:44.233 "data_offset": 0, 00:12:44.233 "data_size": 65536 00:12:44.233 }, 00:12:44.233 { 00:12:44.233 "name": "BaseBdev2", 00:12:44.233 "uuid": "fe801a42-3cc5-4222-a510-a4c9f5d1cb03", 00:12:44.233 "is_configured": true, 00:12:44.233 "data_offset": 0, 00:12:44.233 "data_size": 65536 00:12:44.233 }, 00:12:44.233 { 00:12:44.233 "name": "BaseBdev3", 00:12:44.233 "uuid": "30cb8e9e-a2f3-45e2-b384-5a73740a46fe", 
00:12:44.233 "is_configured": true, 00:12:44.233 "data_offset": 0, 00:12:44.233 "data_size": 65536 00:12:44.233 }, 00:12:44.233 { 00:12:44.233 "name": "BaseBdev4", 00:12:44.233 "uuid": "3a9e341d-0b1e-4e73-b75e-026daa9f7104", 00:12:44.233 "is_configured": true, 00:12:44.233 "data_offset": 0, 00:12:44.233 "data_size": 65536 00:12:44.233 } 00:12:44.233 ] 00:12:44.233 }' 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.233 03:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.801 [2024-11-05 03:23:58.198905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.801 "name": "Existed_Raid", 00:12:44.801 "aliases": [ 00:12:44.801 "9ee00649-d149-44d2-91ea-1c1f37b9bd99" 00:12:44.801 ], 00:12:44.801 "product_name": "Raid Volume", 00:12:44.801 "block_size": 512, 00:12:44.801 "num_blocks": 65536, 00:12:44.801 "uuid": "9ee00649-d149-44d2-91ea-1c1f37b9bd99", 00:12:44.801 "assigned_rate_limits": { 00:12:44.801 "rw_ios_per_sec": 0, 00:12:44.801 "rw_mbytes_per_sec": 0, 00:12:44.801 "r_mbytes_per_sec": 0, 00:12:44.801 "w_mbytes_per_sec": 0 00:12:44.801 }, 00:12:44.801 "claimed": false, 00:12:44.801 "zoned": false, 00:12:44.801 "supported_io_types": { 00:12:44.801 "read": true, 00:12:44.801 "write": true, 00:12:44.801 "unmap": false, 00:12:44.801 "flush": false, 00:12:44.801 "reset": true, 00:12:44.801 "nvme_admin": false, 00:12:44.801 "nvme_io": false, 00:12:44.801 "nvme_io_md": false, 00:12:44.801 "write_zeroes": true, 00:12:44.801 "zcopy": false, 00:12:44.801 "get_zone_info": false, 00:12:44.801 "zone_management": false, 00:12:44.801 "zone_append": false, 00:12:44.801 "compare": false, 00:12:44.801 "compare_and_write": false, 00:12:44.801 "abort": false, 00:12:44.801 "seek_hole": false, 00:12:44.801 "seek_data": false, 00:12:44.801 "copy": false, 00:12:44.801 "nvme_iov_md": false 00:12:44.801 }, 00:12:44.801 "memory_domains": [ 00:12:44.801 { 00:12:44.801 "dma_device_id": "system", 00:12:44.801 "dma_device_type": 1 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.801 "dma_device_type": 2 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "system", 00:12:44.801 "dma_device_type": 1 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.801 "dma_device_type": 2 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "system", 00:12:44.801 "dma_device_type": 1 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.801 "dma_device_type": 2 
00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "system", 00:12:44.801 "dma_device_type": 1 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.801 "dma_device_type": 2 00:12:44.801 } 00:12:44.801 ], 00:12:44.801 "driver_specific": { 00:12:44.801 "raid": { 00:12:44.801 "uuid": "9ee00649-d149-44d2-91ea-1c1f37b9bd99", 00:12:44.801 "strip_size_kb": 0, 00:12:44.801 "state": "online", 00:12:44.801 "raid_level": "raid1", 00:12:44.801 "superblock": false, 00:12:44.801 "num_base_bdevs": 4, 00:12:44.801 "num_base_bdevs_discovered": 4, 00:12:44.801 "num_base_bdevs_operational": 4, 00:12:44.801 "base_bdevs_list": [ 00:12:44.801 { 00:12:44.801 "name": "BaseBdev1", 00:12:44.801 "uuid": "37d2c638-b726-4c14-af07-3f2733eba8a2", 00:12:44.801 "is_configured": true, 00:12:44.801 "data_offset": 0, 00:12:44.801 "data_size": 65536 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "name": "BaseBdev2", 00:12:44.801 "uuid": "fe801a42-3cc5-4222-a510-a4c9f5d1cb03", 00:12:44.801 "is_configured": true, 00:12:44.801 "data_offset": 0, 00:12:44.801 "data_size": 65536 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "name": "BaseBdev3", 00:12:44.801 "uuid": "30cb8e9e-a2f3-45e2-b384-5a73740a46fe", 00:12:44.801 "is_configured": true, 00:12:44.801 "data_offset": 0, 00:12:44.801 "data_size": 65536 00:12:44.801 }, 00:12:44.801 { 00:12:44.801 "name": "BaseBdev4", 00:12:44.801 "uuid": "3a9e341d-0b1e-4e73-b75e-026daa9f7104", 00:12:44.801 "is_configured": true, 00:12:44.801 "data_offset": 0, 00:12:44.801 "data_size": 65536 00:12:44.801 } 00:12:44.801 ] 00:12:44.801 } 00:12:44.801 } 00:12:44.801 }' 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:44.801 BaseBdev2 00:12:44.801 BaseBdev3 00:12:44.801 BaseBdev4' 00:12:44.801 
03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.801 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.802 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 [2024-11-05 03:23:58.554636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.061 "name": "Existed_Raid", 00:12:45.061 "uuid": "9ee00649-d149-44d2-91ea-1c1f37b9bd99", 00:12:45.061 "strip_size_kb": 0, 00:12:45.061 "state": "online", 00:12:45.061 "raid_level": "raid1", 00:12:45.061 "superblock": false, 00:12:45.061 "num_base_bdevs": 4, 00:12:45.061 "num_base_bdevs_discovered": 3, 00:12:45.061 "num_base_bdevs_operational": 3, 00:12:45.061 "base_bdevs_list": [ 00:12:45.061 { 00:12:45.061 "name": null, 00:12:45.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.061 "is_configured": false, 00:12:45.061 "data_offset": 0, 00:12:45.061 "data_size": 65536 00:12:45.061 }, 00:12:45.061 { 00:12:45.061 "name": "BaseBdev2", 00:12:45.061 "uuid": "fe801a42-3cc5-4222-a510-a4c9f5d1cb03", 00:12:45.061 "is_configured": true, 00:12:45.061 "data_offset": 0, 00:12:45.061 "data_size": 65536 00:12:45.061 }, 00:12:45.061 { 00:12:45.061 "name": "BaseBdev3", 00:12:45.061 "uuid": "30cb8e9e-a2f3-45e2-b384-5a73740a46fe", 00:12:45.061 "is_configured": true, 00:12:45.061 "data_offset": 0, 00:12:45.061 "data_size": 65536 00:12:45.061 }, 00:12:45.061 { 
00:12:45.061 "name": "BaseBdev4", 00:12:45.061 "uuid": "3a9e341d-0b1e-4e73-b75e-026daa9f7104", 00:12:45.061 "is_configured": true, 00:12:45.061 "data_offset": 0, 00:12:45.061 "data_size": 65536 00:12:45.061 } 00:12:45.061 ] 00:12:45.061 }' 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.061 03:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.630 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.630 [2024-11-05 03:23:59.228217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.889 
03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.889 [2024-11-05 03:23:59.362072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.889 03:23:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.889 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.889 [2024-11-05 03:23:59.496582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:45.889 [2024-11-05 03:23:59.496733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.903 [2024-11-05 03:23:59.570501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.903 [2024-11-05 03:23:59.570570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.903 [2024-11-05 03:23:59.570588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.903 03:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.903 BaseBdev2 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.903 03:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.903 [ 00:12:46.903 { 00:12:46.903 "name": "BaseBdev2", 00:12:46.903 "aliases": [ 00:12:46.903 "85957947-c9f3-44c9-acb9-826633cf8506" 00:12:46.903 ], 00:12:46.903 "product_name": "Malloc disk", 00:12:46.903 "block_size": 512, 00:12:46.903 "num_blocks": 65536, 00:12:46.903 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:46.903 "assigned_rate_limits": { 00:12:46.903 "rw_ios_per_sec": 0, 00:12:46.903 "rw_mbytes_per_sec": 0, 00:12:46.903 "r_mbytes_per_sec": 0, 00:12:46.903 "w_mbytes_per_sec": 0 00:12:46.903 }, 00:12:46.903 "claimed": false, 00:12:46.903 "zoned": false, 00:12:46.903 "supported_io_types": { 00:12:46.903 "read": true, 00:12:46.903 "write": true, 00:12:46.903 "unmap": true, 00:12:46.903 "flush": true, 00:12:46.903 "reset": true, 00:12:46.903 "nvme_admin": false, 00:12:46.903 "nvme_io": false, 00:12:46.903 "nvme_io_md": false, 00:12:46.903 "write_zeroes": true, 00:12:46.903 "zcopy": true, 00:12:46.903 "get_zone_info": false, 00:12:46.903 "zone_management": false, 00:12:46.903 "zone_append": false, 00:12:46.903 "compare": false, 00:12:46.903 "compare_and_write": false, 
00:12:46.903 "abort": true, 00:12:46.903 "seek_hole": false, 00:12:46.903 "seek_data": false, 00:12:46.903 "copy": true, 00:12:46.903 "nvme_iov_md": false 00:12:46.903 }, 00:12:46.903 "memory_domains": [ 00:12:46.903 { 00:12:46.903 "dma_device_id": "system", 00:12:46.903 "dma_device_type": 1 00:12:46.903 }, 00:12:46.903 { 00:12:46.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.903 "dma_device_type": 2 00:12:46.903 } 00:12:46.903 ], 00:12:46.903 "driver_specific": {} 00:12:46.903 } 00:12:46.903 ] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.903 BaseBdev3 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.903 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.904 03:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 [ 00:12:46.904 { 00:12:46.904 "name": "BaseBdev3", 00:12:46.904 "aliases": [ 00:12:46.904 "56c44d3e-c892-4bdd-ab50-87496c8e3b75" 00:12:46.904 ], 00:12:46.904 "product_name": "Malloc disk", 00:12:46.904 "block_size": 512, 00:12:46.904 "num_blocks": 65536, 00:12:46.904 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:46.904 "assigned_rate_limits": { 00:12:46.904 "rw_ios_per_sec": 0, 00:12:46.904 "rw_mbytes_per_sec": 0, 00:12:46.904 "r_mbytes_per_sec": 0, 00:12:46.904 "w_mbytes_per_sec": 0 00:12:46.904 }, 00:12:46.904 "claimed": false, 00:12:46.904 "zoned": false, 00:12:46.904 "supported_io_types": { 00:12:46.904 "read": true, 00:12:46.904 "write": true, 00:12:46.904 "unmap": true, 00:12:46.904 "flush": true, 00:12:46.904 "reset": true, 00:12:46.904 "nvme_admin": false, 00:12:46.904 "nvme_io": false, 00:12:46.904 "nvme_io_md": false, 00:12:46.904 "write_zeroes": true, 00:12:46.904 "zcopy": true, 00:12:46.904 "get_zone_info": false, 00:12:46.904 "zone_management": false, 00:12:46.904 "zone_append": false, 00:12:46.904 "compare": false, 00:12:46.904 "compare_and_write": false, 
00:12:46.904 "abort": true, 00:12:46.904 "seek_hole": false, 00:12:46.904 "seek_data": false, 00:12:46.904 "copy": true, 00:12:46.904 "nvme_iov_md": false 00:12:46.904 }, 00:12:46.904 "memory_domains": [ 00:12:46.904 { 00:12:46.904 "dma_device_id": "system", 00:12:46.904 "dma_device_type": 1 00:12:46.904 }, 00:12:46.904 { 00:12:46.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.904 "dma_device_type": 2 00:12:46.904 } 00:12:46.904 ], 00:12:46.904 "driver_specific": {} 00:12:46.904 } 00:12:46.904 ] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 BaseBdev4 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.904 03:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 [ 00:12:46.904 { 00:12:46.904 "name": "BaseBdev4", 00:12:46.904 "aliases": [ 00:12:46.904 "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44" 00:12:46.904 ], 00:12:46.904 "product_name": "Malloc disk", 00:12:46.904 "block_size": 512, 00:12:46.904 "num_blocks": 65536, 00:12:46.904 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:46.904 "assigned_rate_limits": { 00:12:46.904 "rw_ios_per_sec": 0, 00:12:46.904 "rw_mbytes_per_sec": 0, 00:12:46.904 "r_mbytes_per_sec": 0, 00:12:46.904 "w_mbytes_per_sec": 0 00:12:46.904 }, 00:12:46.904 "claimed": false, 00:12:46.904 "zoned": false, 00:12:46.904 "supported_io_types": { 00:12:46.904 "read": true, 00:12:46.904 "write": true, 00:12:46.904 "unmap": true, 00:12:46.904 "flush": true, 00:12:46.904 "reset": true, 00:12:46.904 "nvme_admin": false, 00:12:46.904 "nvme_io": false, 00:12:46.904 "nvme_io_md": false, 00:12:46.904 "write_zeroes": true, 00:12:46.904 "zcopy": true, 00:12:46.904 "get_zone_info": false, 00:12:46.904 "zone_management": false, 00:12:46.904 "zone_append": false, 00:12:46.904 "compare": false, 00:12:46.904 "compare_and_write": false, 
00:12:46.904 "abort": true, 00:12:46.904 "seek_hole": false, 00:12:46.904 "seek_data": false, 00:12:46.904 "copy": true, 00:12:46.904 "nvme_iov_md": false 00:12:46.904 }, 00:12:46.904 "memory_domains": [ 00:12:46.904 { 00:12:46.904 "dma_device_id": "system", 00:12:46.904 "dma_device_type": 1 00:12:46.904 }, 00:12:46.904 { 00:12:46.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.904 "dma_device_type": 2 00:12:46.904 } 00:12:46.904 ], 00:12:46.904 "driver_specific": {} 00:12:46.904 } 00:12:46.904 ] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 [2024-11-05 03:23:59.849794] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.904 [2024-11-05 03:23:59.850050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.904 [2024-11-05 03:23:59.850175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.904 [2024-11-05 03:23:59.852708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.904 [2024-11-05 03:23:59.852901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.904 03:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.904 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.904 "name": "Existed_Raid", 00:12:46.904 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:46.904 "strip_size_kb": 0, 00:12:46.904 "state": "configuring", 00:12:46.904 "raid_level": "raid1", 00:12:46.904 "superblock": false, 00:12:46.904 "num_base_bdevs": 4, 00:12:46.904 "num_base_bdevs_discovered": 3, 00:12:46.904 "num_base_bdevs_operational": 4, 00:12:46.904 "base_bdevs_list": [ 00:12:46.904 { 00:12:46.904 "name": "BaseBdev1", 00:12:46.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.905 "is_configured": false, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 0 00:12:46.905 }, 00:12:46.905 { 00:12:46.905 "name": "BaseBdev2", 00:12:46.905 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:46.905 "is_configured": true, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 65536 00:12:46.905 }, 00:12:46.905 { 00:12:46.905 "name": "BaseBdev3", 00:12:46.905 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:46.905 "is_configured": true, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 65536 00:12:46.905 }, 00:12:46.905 { 00:12:46.905 "name": "BaseBdev4", 00:12:46.905 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:46.905 "is_configured": true, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 65536 00:12:46.905 } 00:12:46.905 ] 00:12:46.905 }' 00:12:46.905 03:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.905 03:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.905 [2024-11-05 03:24:00.390054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.905 "name": "Existed_Raid", 00:12:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.905 
"strip_size_kb": 0, 00:12:46.905 "state": "configuring", 00:12:46.905 "raid_level": "raid1", 00:12:46.905 "superblock": false, 00:12:46.905 "num_base_bdevs": 4, 00:12:46.905 "num_base_bdevs_discovered": 2, 00:12:46.905 "num_base_bdevs_operational": 4, 00:12:46.905 "base_bdevs_list": [ 00:12:46.905 { 00:12:46.905 "name": "BaseBdev1", 00:12:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.905 "is_configured": false, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 0 00:12:46.905 }, 00:12:46.905 { 00:12:46.905 "name": null, 00:12:46.905 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:46.905 "is_configured": false, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 65536 00:12:46.905 }, 00:12:46.905 { 00:12:46.905 "name": "BaseBdev3", 00:12:46.905 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:46.905 "is_configured": true, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 65536 00:12:46.905 }, 00:12:46.905 { 00:12:46.905 "name": "BaseBdev4", 00:12:46.905 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:46.905 "is_configured": true, 00:12:46.905 "data_offset": 0, 00:12:46.905 "data_size": 65536 00:12:46.905 } 00:12:46.905 ] 00:12:46.905 }' 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.905 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.520 03:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.520 03:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.520 [2024-11-05 03:24:01.005964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.520 BaseBdev1 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.520 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.521 [ 00:12:47.521 { 00:12:47.521 "name": "BaseBdev1", 00:12:47.521 "aliases": [ 00:12:47.521 "0cae3413-e55d-494a-b533-5083c3d07b6c" 00:12:47.521 ], 00:12:47.521 "product_name": "Malloc disk", 00:12:47.521 "block_size": 512, 00:12:47.521 "num_blocks": 65536, 00:12:47.521 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:47.521 "assigned_rate_limits": { 00:12:47.521 "rw_ios_per_sec": 0, 00:12:47.521 "rw_mbytes_per_sec": 0, 00:12:47.521 "r_mbytes_per_sec": 0, 00:12:47.521 "w_mbytes_per_sec": 0 00:12:47.521 }, 00:12:47.521 "claimed": true, 00:12:47.521 "claim_type": "exclusive_write", 00:12:47.521 "zoned": false, 00:12:47.521 "supported_io_types": { 00:12:47.521 "read": true, 00:12:47.521 "write": true, 00:12:47.521 "unmap": true, 00:12:47.521 "flush": true, 00:12:47.521 "reset": true, 00:12:47.521 "nvme_admin": false, 00:12:47.521 "nvme_io": false, 00:12:47.521 "nvme_io_md": false, 00:12:47.521 "write_zeroes": true, 00:12:47.521 "zcopy": true, 00:12:47.521 "get_zone_info": false, 00:12:47.521 "zone_management": false, 00:12:47.521 "zone_append": false, 00:12:47.521 "compare": false, 00:12:47.521 "compare_and_write": false, 00:12:47.521 "abort": true, 00:12:47.521 "seek_hole": false, 00:12:47.521 "seek_data": false, 00:12:47.521 "copy": true, 00:12:47.521 "nvme_iov_md": false 00:12:47.521 }, 00:12:47.521 "memory_domains": [ 00:12:47.521 { 00:12:47.521 "dma_device_id": "system", 00:12:47.521 "dma_device_type": 1 00:12:47.521 }, 00:12:47.521 { 00:12:47.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.521 "dma_device_type": 2 00:12:47.521 } 00:12:47.521 ], 00:12:47.521 "driver_specific": {} 00:12:47.521 } 00:12:47.521 ] 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.521 "name": "Existed_Raid", 00:12:47.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.521 
"strip_size_kb": 0, 00:12:47.521 "state": "configuring", 00:12:47.521 "raid_level": "raid1", 00:12:47.521 "superblock": false, 00:12:47.521 "num_base_bdevs": 4, 00:12:47.521 "num_base_bdevs_discovered": 3, 00:12:47.521 "num_base_bdevs_operational": 4, 00:12:47.521 "base_bdevs_list": [ 00:12:47.521 { 00:12:47.521 "name": "BaseBdev1", 00:12:47.521 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:47.521 "is_configured": true, 00:12:47.521 "data_offset": 0, 00:12:47.521 "data_size": 65536 00:12:47.521 }, 00:12:47.521 { 00:12:47.521 "name": null, 00:12:47.521 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:47.521 "is_configured": false, 00:12:47.521 "data_offset": 0, 00:12:47.521 "data_size": 65536 00:12:47.521 }, 00:12:47.521 { 00:12:47.521 "name": "BaseBdev3", 00:12:47.521 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:47.521 "is_configured": true, 00:12:47.521 "data_offset": 0, 00:12:47.521 "data_size": 65536 00:12:47.521 }, 00:12:47.521 { 00:12:47.521 "name": "BaseBdev4", 00:12:47.521 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:47.521 "is_configured": true, 00:12:47.521 "data_offset": 0, 00:12:47.521 "data_size": 65536 00:12:47.521 } 00:12:47.521 ] 00:12:47.521 }' 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.521 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.089 
03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.089 [2024-11-05 03:24:01.618183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.089 "name": "Existed_Raid", 00:12:48.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.089 "strip_size_kb": 0, 00:12:48.089 "state": "configuring", 00:12:48.089 "raid_level": "raid1", 00:12:48.089 "superblock": false, 00:12:48.089 "num_base_bdevs": 4, 00:12:48.089 "num_base_bdevs_discovered": 2, 00:12:48.089 "num_base_bdevs_operational": 4, 00:12:48.089 "base_bdevs_list": [ 00:12:48.089 { 00:12:48.089 "name": "BaseBdev1", 00:12:48.089 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:48.089 "is_configured": true, 00:12:48.089 "data_offset": 0, 00:12:48.089 "data_size": 65536 00:12:48.089 }, 00:12:48.089 { 00:12:48.089 "name": null, 00:12:48.089 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:48.089 "is_configured": false, 00:12:48.089 "data_offset": 0, 00:12:48.089 "data_size": 65536 00:12:48.089 }, 00:12:48.089 { 00:12:48.089 "name": null, 00:12:48.089 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:48.089 "is_configured": false, 00:12:48.089 "data_offset": 0, 00:12:48.089 "data_size": 65536 00:12:48.089 }, 00:12:48.089 { 00:12:48.089 "name": "BaseBdev4", 00:12:48.089 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:48.089 "is_configured": true, 00:12:48.089 "data_offset": 0, 00:12:48.089 "data_size": 65536 00:12:48.089 } 00:12:48.089 ] 00:12:48.089 }' 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.089 03:24:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.658 [2024-11-05 03:24:02.206297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.658 "name": "Existed_Raid", 00:12:48.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.658 "strip_size_kb": 0, 00:12:48.658 "state": "configuring", 00:12:48.658 "raid_level": "raid1", 00:12:48.658 "superblock": false, 00:12:48.658 "num_base_bdevs": 4, 00:12:48.658 "num_base_bdevs_discovered": 3, 00:12:48.658 "num_base_bdevs_operational": 4, 00:12:48.658 "base_bdevs_list": [ 00:12:48.658 { 00:12:48.658 "name": "BaseBdev1", 00:12:48.658 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:48.658 "is_configured": true, 00:12:48.658 "data_offset": 0, 00:12:48.658 "data_size": 65536 00:12:48.658 }, 00:12:48.658 { 00:12:48.658 "name": null, 00:12:48.658 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:48.658 "is_configured": false, 00:12:48.658 "data_offset": 0, 00:12:48.658 "data_size": 65536 00:12:48.658 }, 00:12:48.658 { 
00:12:48.658 "name": "BaseBdev3", 00:12:48.658 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:48.658 "is_configured": true, 00:12:48.658 "data_offset": 0, 00:12:48.658 "data_size": 65536 00:12:48.658 }, 00:12:48.658 { 00:12:48.658 "name": "BaseBdev4", 00:12:48.658 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:48.658 "is_configured": true, 00:12:48.658 "data_offset": 0, 00:12:48.658 "data_size": 65536 00:12:48.658 } 00:12:48.658 ] 00:12:48.658 }' 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.658 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 [2024-11-05 03:24:02.782563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.485 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.485 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.485 "name": "Existed_Raid", 00:12:49.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.485 "strip_size_kb": 0, 00:12:49.485 "state": "configuring", 00:12:49.485 "raid_level": "raid1", 00:12:49.485 "superblock": false, 00:12:49.485 
"num_base_bdevs": 4, 00:12:49.485 "num_base_bdevs_discovered": 2, 00:12:49.485 "num_base_bdevs_operational": 4, 00:12:49.485 "base_bdevs_list": [ 00:12:49.485 { 00:12:49.485 "name": null, 00:12:49.485 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:49.485 "is_configured": false, 00:12:49.485 "data_offset": 0, 00:12:49.485 "data_size": 65536 00:12:49.485 }, 00:12:49.485 { 00:12:49.485 "name": null, 00:12:49.485 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:49.485 "is_configured": false, 00:12:49.485 "data_offset": 0, 00:12:49.485 "data_size": 65536 00:12:49.485 }, 00:12:49.485 { 00:12:49.485 "name": "BaseBdev3", 00:12:49.485 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:49.485 "is_configured": true, 00:12:49.485 "data_offset": 0, 00:12:49.485 "data_size": 65536 00:12:49.485 }, 00:12:49.485 { 00:12:49.485 "name": "BaseBdev4", 00:12:49.485 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:49.485 "is_configured": true, 00:12:49.485 "data_offset": 0, 00:12:49.485 "data_size": 65536 00:12:49.485 } 00:12:49.485 ] 00:12:49.485 }' 00:12:49.485 03:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.485 03:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:50.053 03:24:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.053 [2024-11-05 03:24:03.438408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.053 03:24:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.053 "name": "Existed_Raid", 00:12:50.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.053 "strip_size_kb": 0, 00:12:50.053 "state": "configuring", 00:12:50.053 "raid_level": "raid1", 00:12:50.053 "superblock": false, 00:12:50.053 "num_base_bdevs": 4, 00:12:50.053 "num_base_bdevs_discovered": 3, 00:12:50.053 "num_base_bdevs_operational": 4, 00:12:50.053 "base_bdevs_list": [ 00:12:50.053 { 00:12:50.053 "name": null, 00:12:50.053 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:50.053 "is_configured": false, 00:12:50.053 "data_offset": 0, 00:12:50.053 "data_size": 65536 00:12:50.053 }, 00:12:50.053 { 00:12:50.053 "name": "BaseBdev2", 00:12:50.053 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:50.053 "is_configured": true, 00:12:50.053 "data_offset": 0, 00:12:50.053 "data_size": 65536 00:12:50.053 }, 00:12:50.053 { 00:12:50.053 "name": "BaseBdev3", 00:12:50.053 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:50.053 "is_configured": true, 00:12:50.053 "data_offset": 0, 00:12:50.053 "data_size": 65536 00:12:50.053 }, 00:12:50.053 { 00:12:50.053 "name": "BaseBdev4", 00:12:50.053 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:50.053 "is_configured": true, 00:12:50.053 "data_offset": 0, 00:12:50.053 "data_size": 65536 00:12:50.053 } 00:12:50.053 ] 00:12:50.053 }' 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.053 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.621 03:24:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.621 03:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.621 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.621 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.621 03:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0cae3413-e55d-494a-b533-5083c3d07b6c 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.621 [2024-11-05 03:24:04.101908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:50.621 [2024-11-05 03:24:04.101954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.621 [2024-11-05 03:24:04.101968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:50.621 
[2024-11-05 03:24:04.102259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:50.621 [2024-11-05 03:24:04.102525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:50.621 [2024-11-05 03:24:04.102541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:50.621 [2024-11-05 03:24:04.102882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.621 NewBaseBdev 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.621 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.622 [ 00:12:50.622 { 00:12:50.622 "name": "NewBaseBdev", 00:12:50.622 "aliases": [ 00:12:50.622 "0cae3413-e55d-494a-b533-5083c3d07b6c" 00:12:50.622 ], 00:12:50.622 "product_name": "Malloc disk", 00:12:50.622 "block_size": 512, 00:12:50.622 "num_blocks": 65536, 00:12:50.622 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:50.622 "assigned_rate_limits": { 00:12:50.622 "rw_ios_per_sec": 0, 00:12:50.622 "rw_mbytes_per_sec": 0, 00:12:50.622 "r_mbytes_per_sec": 0, 00:12:50.622 "w_mbytes_per_sec": 0 00:12:50.622 }, 00:12:50.622 "claimed": true, 00:12:50.622 "claim_type": "exclusive_write", 00:12:50.622 "zoned": false, 00:12:50.622 "supported_io_types": { 00:12:50.622 "read": true, 00:12:50.622 "write": true, 00:12:50.622 "unmap": true, 00:12:50.622 "flush": true, 00:12:50.622 "reset": true, 00:12:50.622 "nvme_admin": false, 00:12:50.622 "nvme_io": false, 00:12:50.622 "nvme_io_md": false, 00:12:50.622 "write_zeroes": true, 00:12:50.622 "zcopy": true, 00:12:50.622 "get_zone_info": false, 00:12:50.622 "zone_management": false, 00:12:50.622 "zone_append": false, 00:12:50.622 "compare": false, 00:12:50.622 "compare_and_write": false, 00:12:50.622 "abort": true, 00:12:50.622 "seek_hole": false, 00:12:50.622 "seek_data": false, 00:12:50.622 "copy": true, 00:12:50.622 "nvme_iov_md": false 00:12:50.622 }, 00:12:50.622 "memory_domains": [ 00:12:50.622 { 00:12:50.622 "dma_device_id": "system", 00:12:50.622 "dma_device_type": 1 00:12:50.622 }, 00:12:50.622 { 00:12:50.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.622 "dma_device_type": 2 00:12:50.622 } 00:12:50.622 ], 00:12:50.622 "driver_specific": {} 00:12:50.622 } 00:12:50.622 ] 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.622 "name": "Existed_Raid", 00:12:50.622 "uuid": "1cdc2148-949f-416c-85ac-bafa920263b3", 00:12:50.622 "strip_size_kb": 0, 00:12:50.622 "state": "online", 00:12:50.622 
"raid_level": "raid1", 00:12:50.622 "superblock": false, 00:12:50.622 "num_base_bdevs": 4, 00:12:50.622 "num_base_bdevs_discovered": 4, 00:12:50.622 "num_base_bdevs_operational": 4, 00:12:50.622 "base_bdevs_list": [ 00:12:50.622 { 00:12:50.622 "name": "NewBaseBdev", 00:12:50.622 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:50.622 "is_configured": true, 00:12:50.622 "data_offset": 0, 00:12:50.622 "data_size": 65536 00:12:50.622 }, 00:12:50.622 { 00:12:50.622 "name": "BaseBdev2", 00:12:50.622 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:50.622 "is_configured": true, 00:12:50.622 "data_offset": 0, 00:12:50.622 "data_size": 65536 00:12:50.622 }, 00:12:50.622 { 00:12:50.622 "name": "BaseBdev3", 00:12:50.622 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:50.622 "is_configured": true, 00:12:50.622 "data_offset": 0, 00:12:50.622 "data_size": 65536 00:12:50.622 }, 00:12:50.622 { 00:12:50.622 "name": "BaseBdev4", 00:12:50.622 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:50.622 "is_configured": true, 00:12:50.622 "data_offset": 0, 00:12:50.622 "data_size": 65536 00:12:50.622 } 00:12:50.622 ] 00:12:50.622 }' 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.622 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.191 [2024-11-05 03:24:04.690610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.191 "name": "Existed_Raid", 00:12:51.191 "aliases": [ 00:12:51.191 "1cdc2148-949f-416c-85ac-bafa920263b3" 00:12:51.191 ], 00:12:51.191 "product_name": "Raid Volume", 00:12:51.191 "block_size": 512, 00:12:51.191 "num_blocks": 65536, 00:12:51.191 "uuid": "1cdc2148-949f-416c-85ac-bafa920263b3", 00:12:51.191 "assigned_rate_limits": { 00:12:51.191 "rw_ios_per_sec": 0, 00:12:51.191 "rw_mbytes_per_sec": 0, 00:12:51.191 "r_mbytes_per_sec": 0, 00:12:51.191 "w_mbytes_per_sec": 0 00:12:51.191 }, 00:12:51.191 "claimed": false, 00:12:51.191 "zoned": false, 00:12:51.191 "supported_io_types": { 00:12:51.191 "read": true, 00:12:51.191 "write": true, 00:12:51.191 "unmap": false, 00:12:51.191 "flush": false, 00:12:51.191 "reset": true, 00:12:51.191 "nvme_admin": false, 00:12:51.191 "nvme_io": false, 00:12:51.191 "nvme_io_md": false, 00:12:51.191 "write_zeroes": true, 00:12:51.191 "zcopy": false, 00:12:51.191 "get_zone_info": false, 00:12:51.191 "zone_management": false, 00:12:51.191 "zone_append": false, 00:12:51.191 "compare": false, 00:12:51.191 "compare_and_write": false, 00:12:51.191 "abort": false, 00:12:51.191 "seek_hole": false, 00:12:51.191 "seek_data": false, 00:12:51.191 
"copy": false, 00:12:51.191 "nvme_iov_md": false 00:12:51.191 }, 00:12:51.191 "memory_domains": [ 00:12:51.191 { 00:12:51.191 "dma_device_id": "system", 00:12:51.191 "dma_device_type": 1 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.191 "dma_device_type": 2 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "system", 00:12:51.191 "dma_device_type": 1 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.191 "dma_device_type": 2 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "system", 00:12:51.191 "dma_device_type": 1 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.191 "dma_device_type": 2 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "system", 00:12:51.191 "dma_device_type": 1 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.191 "dma_device_type": 2 00:12:51.191 } 00:12:51.191 ], 00:12:51.191 "driver_specific": { 00:12:51.191 "raid": { 00:12:51.191 "uuid": "1cdc2148-949f-416c-85ac-bafa920263b3", 00:12:51.191 "strip_size_kb": 0, 00:12:51.191 "state": "online", 00:12:51.191 "raid_level": "raid1", 00:12:51.191 "superblock": false, 00:12:51.191 "num_base_bdevs": 4, 00:12:51.191 "num_base_bdevs_discovered": 4, 00:12:51.191 "num_base_bdevs_operational": 4, 00:12:51.191 "base_bdevs_list": [ 00:12:51.191 { 00:12:51.191 "name": "NewBaseBdev", 00:12:51.191 "uuid": "0cae3413-e55d-494a-b533-5083c3d07b6c", 00:12:51.191 "is_configured": true, 00:12:51.191 "data_offset": 0, 00:12:51.191 "data_size": 65536 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "name": "BaseBdev2", 00:12:51.191 "uuid": "85957947-c9f3-44c9-acb9-826633cf8506", 00:12:51.191 "is_configured": true, 00:12:51.191 "data_offset": 0, 00:12:51.191 "data_size": 65536 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "name": "BaseBdev3", 00:12:51.191 "uuid": "56c44d3e-c892-4bdd-ab50-87496c8e3b75", 00:12:51.191 
"is_configured": true, 00:12:51.191 "data_offset": 0, 00:12:51.191 "data_size": 65536 00:12:51.191 }, 00:12:51.191 { 00:12:51.191 "name": "BaseBdev4", 00:12:51.191 "uuid": "ca6f7e30-e0a4-49e6-bb37-a8c23f825c44", 00:12:51.191 "is_configured": true, 00:12:51.191 "data_offset": 0, 00:12:51.191 "data_size": 65536 00:12:51.191 } 00:12:51.191 ] 00:12:51.191 } 00:12:51.191 } 00:12:51.191 }' 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:51.191 BaseBdev2 00:12:51.191 BaseBdev3 00:12:51.191 BaseBdev4' 00:12:51.191 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.451 03:24:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.451 03:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.451 03:24:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.451 [2024-11-05 03:24:05.066219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.451 [2024-11-05 03:24:05.066250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.451 [2024-11-05 03:24:05.066367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.451 [2024-11-05 03:24:05.066784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.451 [2024-11-05 03:24:05.066804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73070 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73070 ']' 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73070 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.451 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73070 00:12:51.710 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.710 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.710 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73070' 00:12:51.710 killing process with pid 73070 00:12:51.710 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73070 00:12:51.710 [2024-11-05 03:24:05.108500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.710 03:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73070 00:12:51.990 [2024-11-05 03:24:05.406084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:52.931 00:12:52.931 real 0m12.612s 00:12:52.931 user 0m21.288s 00:12:52.931 sys 0m1.613s 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.931 ************************************ 00:12:52.931 END TEST raid_state_function_test 00:12:52.931 ************************************ 
00:12:52.931 03:24:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:52.931 03:24:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:52.931 03:24:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.931 03:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.931 ************************************ 00:12:52.931 START TEST raid_state_function_test_sb 00:12:52.931 ************************************ 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.931 
03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:52.931 Process raid pid: 73747 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:52.931 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73747 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73747' 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73747 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73747 ']' 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.932 03:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.932 [2024-11-05 03:24:06.492960] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:52.932 [2024-11-05 03:24:06.493457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.191 [2024-11-05 03:24:06.675934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.191 [2024-11-05 03:24:06.786380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.450 [2024-11-05 03:24:06.985382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.450 [2024-11-05 03:24:06.985418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.018 [2024-11-05 03:24:07.439918] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.018 [2024-11-05 03:24:07.439991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.018 [2024-11-05 03:24:07.440007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.018 [2024-11-05 03:24:07.440021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.018 [2024-11-05 03:24:07.440030] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:54.018 [2024-11-05 03:24:07.440042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:54.018 [2024-11-05 03:24:07.440050] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:54.018 [2024-11-05 03:24:07.440063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.018 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.019 03:24:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.019 "name": "Existed_Raid", 00:12:54.019 "uuid": "7d30cc33-5fc3-4df8-91ca-4681d7084639", 00:12:54.019 "strip_size_kb": 0, 00:12:54.019 "state": "configuring", 00:12:54.019 "raid_level": "raid1", 00:12:54.019 "superblock": true, 00:12:54.019 "num_base_bdevs": 4, 00:12:54.019 "num_base_bdevs_discovered": 0, 00:12:54.019 "num_base_bdevs_operational": 4, 00:12:54.019 "base_bdevs_list": [ 00:12:54.019 { 00:12:54.019 "name": "BaseBdev1", 00:12:54.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.019 "is_configured": false, 00:12:54.019 "data_offset": 0, 00:12:54.019 "data_size": 0 00:12:54.019 }, 00:12:54.019 { 00:12:54.019 "name": "BaseBdev2", 00:12:54.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.019 "is_configured": false, 00:12:54.019 "data_offset": 0, 00:12:54.019 "data_size": 0 00:12:54.019 }, 00:12:54.019 { 00:12:54.019 "name": "BaseBdev3", 00:12:54.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.019 "is_configured": false, 00:12:54.019 "data_offset": 0, 00:12:54.019 "data_size": 0 00:12:54.019 }, 00:12:54.019 { 00:12:54.019 "name": "BaseBdev4", 00:12:54.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.019 "is_configured": false, 00:12:54.019 "data_offset": 0, 00:12:54.019 "data_size": 0 00:12:54.019 } 00:12:54.019 ] 00:12:54.019 }' 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.019 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 [2024-11-05 03:24:07.972006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.587 [2024-11-05 03:24:07.972047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 [2024-11-05 03:24:07.980042] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.587 [2024-11-05 03:24:07.980258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.587 [2024-11-05 03:24:07.980426] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.587 [2024-11-05 03:24:07.980488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.587 [2024-11-05 03:24:07.980693] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:54.587 [2024-11-05 03:24:07.980726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:54.587 [2024-11-05 03:24:07.980738] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:54.587 [2024-11-05 03:24:07.980753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 03:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 [2024-11-05 03:24:08.027139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.587 BaseBdev1 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 [ 00:12:54.587 { 00:12:54.587 "name": "BaseBdev1", 00:12:54.587 "aliases": [ 00:12:54.587 "6fabc966-4d09-4409-a473-e693bacd536f" 00:12:54.587 ], 00:12:54.587 "product_name": "Malloc disk", 00:12:54.587 "block_size": 512, 00:12:54.587 "num_blocks": 65536, 00:12:54.587 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:54.587 "assigned_rate_limits": { 00:12:54.587 "rw_ios_per_sec": 0, 00:12:54.587 "rw_mbytes_per_sec": 0, 00:12:54.587 "r_mbytes_per_sec": 0, 00:12:54.587 "w_mbytes_per_sec": 0 00:12:54.587 }, 00:12:54.587 "claimed": true, 00:12:54.587 "claim_type": "exclusive_write", 00:12:54.587 "zoned": false, 00:12:54.587 "supported_io_types": { 00:12:54.587 "read": true, 00:12:54.587 "write": true, 00:12:54.587 "unmap": true, 00:12:54.587 "flush": true, 00:12:54.587 "reset": true, 00:12:54.587 "nvme_admin": false, 00:12:54.587 "nvme_io": false, 00:12:54.587 "nvme_io_md": false, 00:12:54.587 "write_zeroes": true, 00:12:54.587 "zcopy": true, 00:12:54.587 "get_zone_info": false, 00:12:54.587 "zone_management": false, 00:12:54.587 "zone_append": false, 00:12:54.587 "compare": false, 00:12:54.587 "compare_and_write": false, 00:12:54.587 "abort": true, 00:12:54.587 "seek_hole": false, 00:12:54.587 "seek_data": false, 00:12:54.587 "copy": true, 00:12:54.587 "nvme_iov_md": false 00:12:54.587 }, 00:12:54.587 "memory_domains": [ 00:12:54.587 { 00:12:54.587 "dma_device_id": "system", 00:12:54.587 "dma_device_type": 1 00:12:54.587 }, 00:12:54.587 { 00:12:54.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.587 "dma_device_type": 2 00:12:54.587 } 00:12:54.587 ], 00:12:54.587 "driver_specific": {} 
00:12:54.587 } 00:12:54.587 ] 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.587 "name": "Existed_Raid", 00:12:54.587 "uuid": "b040b8e3-9eed-4b4e-b3d4-e54e1dd998f1", 00:12:54.587 "strip_size_kb": 0, 00:12:54.587 "state": "configuring", 00:12:54.587 "raid_level": "raid1", 00:12:54.587 "superblock": true, 00:12:54.587 "num_base_bdevs": 4, 00:12:54.587 "num_base_bdevs_discovered": 1, 00:12:54.587 "num_base_bdevs_operational": 4, 00:12:54.587 "base_bdevs_list": [ 00:12:54.587 { 00:12:54.587 "name": "BaseBdev1", 00:12:54.587 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:54.587 "is_configured": true, 00:12:54.587 "data_offset": 2048, 00:12:54.587 "data_size": 63488 00:12:54.587 }, 00:12:54.587 { 00:12:54.587 "name": "BaseBdev2", 00:12:54.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.587 "is_configured": false, 00:12:54.587 "data_offset": 0, 00:12:54.587 "data_size": 0 00:12:54.587 }, 00:12:54.587 { 00:12:54.587 "name": "BaseBdev3", 00:12:54.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.587 "is_configured": false, 00:12:54.587 "data_offset": 0, 00:12:54.587 "data_size": 0 00:12:54.587 }, 00:12:54.587 { 00:12:54.587 "name": "BaseBdev4", 00:12:54.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.587 "is_configured": false, 00:12:54.587 "data_offset": 0, 00:12:54.587 "data_size": 0 00:12:54.587 } 00:12:54.587 ] 00:12:54.587 }' 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.587 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.156 [2024-11-05 03:24:08.575221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.156 [2024-11-05 03:24:08.575460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.156 [2024-11-05 03:24:08.583302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.156 [2024-11-05 03:24:08.585727] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.156 [2024-11-05 03:24:08.585782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.156 [2024-11-05 03:24:08.585798] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.156 [2024-11-05 03:24:08.585829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.156 [2024-11-05 03:24:08.585855] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:55.156 [2024-11-05 03:24:08.585882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:55.156 03:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.156 "name": 
"Existed_Raid", 00:12:55.156 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:55.156 "strip_size_kb": 0, 00:12:55.156 "state": "configuring", 00:12:55.156 "raid_level": "raid1", 00:12:55.156 "superblock": true, 00:12:55.156 "num_base_bdevs": 4, 00:12:55.156 "num_base_bdevs_discovered": 1, 00:12:55.156 "num_base_bdevs_operational": 4, 00:12:55.156 "base_bdevs_list": [ 00:12:55.156 { 00:12:55.156 "name": "BaseBdev1", 00:12:55.156 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:55.156 "is_configured": true, 00:12:55.156 "data_offset": 2048, 00:12:55.156 "data_size": 63488 00:12:55.156 }, 00:12:55.156 { 00:12:55.156 "name": "BaseBdev2", 00:12:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.156 "is_configured": false, 00:12:55.156 "data_offset": 0, 00:12:55.156 "data_size": 0 00:12:55.156 }, 00:12:55.156 { 00:12:55.156 "name": "BaseBdev3", 00:12:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.156 "is_configured": false, 00:12:55.156 "data_offset": 0, 00:12:55.156 "data_size": 0 00:12:55.156 }, 00:12:55.156 { 00:12:55.156 "name": "BaseBdev4", 00:12:55.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.156 "is_configured": false, 00:12:55.156 "data_offset": 0, 00:12:55.156 "data_size": 0 00:12:55.156 } 00:12:55.156 ] 00:12:55.156 }' 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.156 03:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.725 [2024-11-05 03:24:09.165530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.725 
BaseBdev2 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.725 [ 00:12:55.725 { 00:12:55.725 "name": "BaseBdev2", 00:12:55.725 "aliases": [ 00:12:55.725 "92f73776-ada2-4df7-9f7e-c77db6daed49" 00:12:55.725 ], 00:12:55.725 "product_name": "Malloc disk", 00:12:55.725 "block_size": 512, 00:12:55.725 "num_blocks": 65536, 00:12:55.725 "uuid": "92f73776-ada2-4df7-9f7e-c77db6daed49", 00:12:55.725 "assigned_rate_limits": { 
00:12:55.725 "rw_ios_per_sec": 0, 00:12:55.725 "rw_mbytes_per_sec": 0, 00:12:55.725 "r_mbytes_per_sec": 0, 00:12:55.725 "w_mbytes_per_sec": 0 00:12:55.725 }, 00:12:55.725 "claimed": true, 00:12:55.725 "claim_type": "exclusive_write", 00:12:55.725 "zoned": false, 00:12:55.725 "supported_io_types": { 00:12:55.725 "read": true, 00:12:55.725 "write": true, 00:12:55.725 "unmap": true, 00:12:55.725 "flush": true, 00:12:55.725 "reset": true, 00:12:55.725 "nvme_admin": false, 00:12:55.725 "nvme_io": false, 00:12:55.725 "nvme_io_md": false, 00:12:55.725 "write_zeroes": true, 00:12:55.725 "zcopy": true, 00:12:55.725 "get_zone_info": false, 00:12:55.725 "zone_management": false, 00:12:55.725 "zone_append": false, 00:12:55.725 "compare": false, 00:12:55.725 "compare_and_write": false, 00:12:55.725 "abort": true, 00:12:55.725 "seek_hole": false, 00:12:55.725 "seek_data": false, 00:12:55.725 "copy": true, 00:12:55.725 "nvme_iov_md": false 00:12:55.725 }, 00:12:55.725 "memory_domains": [ 00:12:55.725 { 00:12:55.725 "dma_device_id": "system", 00:12:55.725 "dma_device_type": 1 00:12:55.725 }, 00:12:55.725 { 00:12:55.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.725 "dma_device_type": 2 00:12:55.725 } 00:12:55.725 ], 00:12:55.725 "driver_specific": {} 00:12:55.725 } 00:12:55.725 ] 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.725 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.725 "name": "Existed_Raid", 00:12:55.725 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:55.725 "strip_size_kb": 0, 00:12:55.725 "state": "configuring", 00:12:55.725 "raid_level": "raid1", 00:12:55.725 "superblock": true, 00:12:55.725 "num_base_bdevs": 4, 00:12:55.725 "num_base_bdevs_discovered": 2, 00:12:55.725 "num_base_bdevs_operational": 4, 00:12:55.725 
"base_bdevs_list": [ 00:12:55.725 { 00:12:55.725 "name": "BaseBdev1", 00:12:55.725 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:55.725 "is_configured": true, 00:12:55.725 "data_offset": 2048, 00:12:55.726 "data_size": 63488 00:12:55.726 }, 00:12:55.726 { 00:12:55.726 "name": "BaseBdev2", 00:12:55.726 "uuid": "92f73776-ada2-4df7-9f7e-c77db6daed49", 00:12:55.726 "is_configured": true, 00:12:55.726 "data_offset": 2048, 00:12:55.726 "data_size": 63488 00:12:55.726 }, 00:12:55.726 { 00:12:55.726 "name": "BaseBdev3", 00:12:55.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.726 "is_configured": false, 00:12:55.726 "data_offset": 0, 00:12:55.726 "data_size": 0 00:12:55.726 }, 00:12:55.726 { 00:12:55.726 "name": "BaseBdev4", 00:12:55.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.726 "is_configured": false, 00:12:55.726 "data_offset": 0, 00:12:55.726 "data_size": 0 00:12:55.726 } 00:12:55.726 ] 00:12:55.726 }' 00:12:55.726 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.726 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.312 [2024-11-05 03:24:09.761940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.312 BaseBdev3 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.312 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.312 [ 00:12:56.312 { 00:12:56.312 "name": "BaseBdev3", 00:12:56.312 "aliases": [ 00:12:56.312 "94461244-e689-46ed-9297-3bb929084bc4" 00:12:56.312 ], 00:12:56.312 "product_name": "Malloc disk", 00:12:56.312 "block_size": 512, 00:12:56.312 "num_blocks": 65536, 00:12:56.312 "uuid": "94461244-e689-46ed-9297-3bb929084bc4", 00:12:56.312 "assigned_rate_limits": { 00:12:56.312 "rw_ios_per_sec": 0, 00:12:56.312 "rw_mbytes_per_sec": 0, 00:12:56.312 "r_mbytes_per_sec": 0, 00:12:56.312 "w_mbytes_per_sec": 0 00:12:56.312 }, 00:12:56.312 "claimed": true, 00:12:56.312 "claim_type": "exclusive_write", 00:12:56.312 "zoned": false, 00:12:56.312 "supported_io_types": { 00:12:56.312 "read": true, 00:12:56.312 
"write": true, 00:12:56.312 "unmap": true, 00:12:56.312 "flush": true, 00:12:56.312 "reset": true, 00:12:56.312 "nvme_admin": false, 00:12:56.312 "nvme_io": false, 00:12:56.312 "nvme_io_md": false, 00:12:56.312 "write_zeroes": true, 00:12:56.312 "zcopy": true, 00:12:56.312 "get_zone_info": false, 00:12:56.312 "zone_management": false, 00:12:56.312 "zone_append": false, 00:12:56.312 "compare": false, 00:12:56.312 "compare_and_write": false, 00:12:56.312 "abort": true, 00:12:56.312 "seek_hole": false, 00:12:56.312 "seek_data": false, 00:12:56.312 "copy": true, 00:12:56.312 "nvme_iov_md": false 00:12:56.312 }, 00:12:56.312 "memory_domains": [ 00:12:56.312 { 00:12:56.312 "dma_device_id": "system", 00:12:56.312 "dma_device_type": 1 00:12:56.312 }, 00:12:56.313 { 00:12:56.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.313 "dma_device_type": 2 00:12:56.313 } 00:12:56.313 ], 00:12:56.313 "driver_specific": {} 00:12:56.313 } 00:12:56.313 ] 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.313 "name": "Existed_Raid", 00:12:56.313 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:56.313 "strip_size_kb": 0, 00:12:56.313 "state": "configuring", 00:12:56.313 "raid_level": "raid1", 00:12:56.313 "superblock": true, 00:12:56.313 "num_base_bdevs": 4, 00:12:56.313 "num_base_bdevs_discovered": 3, 00:12:56.313 "num_base_bdevs_operational": 4, 00:12:56.313 "base_bdevs_list": [ 00:12:56.313 { 00:12:56.313 "name": "BaseBdev1", 00:12:56.313 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:56.313 "is_configured": true, 00:12:56.313 "data_offset": 2048, 00:12:56.313 "data_size": 63488 00:12:56.313 }, 00:12:56.313 { 00:12:56.313 "name": "BaseBdev2", 00:12:56.313 "uuid": 
"92f73776-ada2-4df7-9f7e-c77db6daed49", 00:12:56.313 "is_configured": true, 00:12:56.313 "data_offset": 2048, 00:12:56.313 "data_size": 63488 00:12:56.313 }, 00:12:56.313 { 00:12:56.313 "name": "BaseBdev3", 00:12:56.313 "uuid": "94461244-e689-46ed-9297-3bb929084bc4", 00:12:56.313 "is_configured": true, 00:12:56.313 "data_offset": 2048, 00:12:56.313 "data_size": 63488 00:12:56.313 }, 00:12:56.313 { 00:12:56.313 "name": "BaseBdev4", 00:12:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.313 "is_configured": false, 00:12:56.313 "data_offset": 0, 00:12:56.313 "data_size": 0 00:12:56.313 } 00:12:56.313 ] 00:12:56.313 }' 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.313 03:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.881 [2024-11-05 03:24:10.358171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.881 [2024-11-05 03:24:10.358538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:56.881 [2024-11-05 03:24:10.358557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.881 BaseBdev4 00:12:56.881 [2024-11-05 03:24:10.358929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:56.881 [2024-11-05 03:24:10.359132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:56.881 [2024-11-05 03:24:10.359170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:56.881 [2024-11-05 03:24:10.359343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.881 [ 00:12:56.881 { 00:12:56.881 "name": "BaseBdev4", 00:12:56.881 "aliases": [ 00:12:56.881 "ad797613-de3e-4e7a-bb4c-b275bc0178d0" 00:12:56.881 ], 00:12:56.881 "product_name": "Malloc disk", 00:12:56.881 "block_size": 512, 00:12:56.881 
"num_blocks": 65536, 00:12:56.881 "uuid": "ad797613-de3e-4e7a-bb4c-b275bc0178d0", 00:12:56.881 "assigned_rate_limits": { 00:12:56.881 "rw_ios_per_sec": 0, 00:12:56.881 "rw_mbytes_per_sec": 0, 00:12:56.881 "r_mbytes_per_sec": 0, 00:12:56.881 "w_mbytes_per_sec": 0 00:12:56.881 }, 00:12:56.881 "claimed": true, 00:12:56.881 "claim_type": "exclusive_write", 00:12:56.881 "zoned": false, 00:12:56.881 "supported_io_types": { 00:12:56.881 "read": true, 00:12:56.881 "write": true, 00:12:56.881 "unmap": true, 00:12:56.881 "flush": true, 00:12:56.881 "reset": true, 00:12:56.881 "nvme_admin": false, 00:12:56.881 "nvme_io": false, 00:12:56.881 "nvme_io_md": false, 00:12:56.881 "write_zeroes": true, 00:12:56.881 "zcopy": true, 00:12:56.881 "get_zone_info": false, 00:12:56.881 "zone_management": false, 00:12:56.881 "zone_append": false, 00:12:56.881 "compare": false, 00:12:56.881 "compare_and_write": false, 00:12:56.881 "abort": true, 00:12:56.881 "seek_hole": false, 00:12:56.881 "seek_data": false, 00:12:56.881 "copy": true, 00:12:56.881 "nvme_iov_md": false 00:12:56.881 }, 00:12:56.881 "memory_domains": [ 00:12:56.881 { 00:12:56.881 "dma_device_id": "system", 00:12:56.881 "dma_device_type": 1 00:12:56.881 }, 00:12:56.881 { 00:12:56.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.881 "dma_device_type": 2 00:12:56.881 } 00:12:56.881 ], 00:12:56.881 "driver_specific": {} 00:12:56.881 } 00:12:56.881 ] 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.881 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.882 "name": "Existed_Raid", 00:12:56.882 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:56.882 "strip_size_kb": 0, 00:12:56.882 "state": "online", 00:12:56.882 "raid_level": "raid1", 00:12:56.882 "superblock": true, 00:12:56.882 "num_base_bdevs": 4, 
00:12:56.882 "num_base_bdevs_discovered": 4, 00:12:56.882 "num_base_bdevs_operational": 4, 00:12:56.882 "base_bdevs_list": [ 00:12:56.882 { 00:12:56.882 "name": "BaseBdev1", 00:12:56.882 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:56.882 "is_configured": true, 00:12:56.882 "data_offset": 2048, 00:12:56.882 "data_size": 63488 00:12:56.882 }, 00:12:56.882 { 00:12:56.882 "name": "BaseBdev2", 00:12:56.882 "uuid": "92f73776-ada2-4df7-9f7e-c77db6daed49", 00:12:56.882 "is_configured": true, 00:12:56.882 "data_offset": 2048, 00:12:56.882 "data_size": 63488 00:12:56.882 }, 00:12:56.882 { 00:12:56.882 "name": "BaseBdev3", 00:12:56.882 "uuid": "94461244-e689-46ed-9297-3bb929084bc4", 00:12:56.882 "is_configured": true, 00:12:56.882 "data_offset": 2048, 00:12:56.882 "data_size": 63488 00:12:56.882 }, 00:12:56.882 { 00:12:56.882 "name": "BaseBdev4", 00:12:56.882 "uuid": "ad797613-de3e-4e7a-bb4c-b275bc0178d0", 00:12:56.882 "is_configured": true, 00:12:56.882 "data_offset": 2048, 00:12:56.882 "data_size": 63488 00:12:56.882 } 00:12:56.882 ] 00:12:56.882 }' 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.882 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.450 
03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.450 [2024-11-05 03:24:10.918894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.450 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.450 "name": "Existed_Raid", 00:12:57.450 "aliases": [ 00:12:57.450 "10515190-f495-4d42-9ac0-891dfa207a2e" 00:12:57.450 ], 00:12:57.450 "product_name": "Raid Volume", 00:12:57.450 "block_size": 512, 00:12:57.450 "num_blocks": 63488, 00:12:57.450 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:57.450 "assigned_rate_limits": { 00:12:57.450 "rw_ios_per_sec": 0, 00:12:57.450 "rw_mbytes_per_sec": 0, 00:12:57.450 "r_mbytes_per_sec": 0, 00:12:57.450 "w_mbytes_per_sec": 0 00:12:57.450 }, 00:12:57.450 "claimed": false, 00:12:57.450 "zoned": false, 00:12:57.450 "supported_io_types": { 00:12:57.450 "read": true, 00:12:57.450 "write": true, 00:12:57.450 "unmap": false, 00:12:57.450 "flush": false, 00:12:57.450 "reset": true, 00:12:57.450 "nvme_admin": false, 00:12:57.450 "nvme_io": false, 00:12:57.450 "nvme_io_md": false, 00:12:57.450 "write_zeroes": true, 00:12:57.450 "zcopy": false, 00:12:57.450 "get_zone_info": false, 00:12:57.450 "zone_management": false, 00:12:57.450 "zone_append": false, 00:12:57.450 "compare": false, 00:12:57.450 "compare_and_write": false, 00:12:57.450 "abort": false, 00:12:57.450 "seek_hole": false, 00:12:57.450 "seek_data": false, 00:12:57.450 "copy": false, 00:12:57.450 
"nvme_iov_md": false 00:12:57.450 }, 00:12:57.450 "memory_domains": [ 00:12:57.450 { 00:12:57.450 "dma_device_id": "system", 00:12:57.450 "dma_device_type": 1 00:12:57.450 }, 00:12:57.450 { 00:12:57.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.450 "dma_device_type": 2 00:12:57.450 }, 00:12:57.450 { 00:12:57.450 "dma_device_id": "system", 00:12:57.450 "dma_device_type": 1 00:12:57.450 }, 00:12:57.451 { 00:12:57.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.451 "dma_device_type": 2 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "dma_device_id": "system", 00:12:57.451 "dma_device_type": 1 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.451 "dma_device_type": 2 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "dma_device_id": "system", 00:12:57.451 "dma_device_type": 1 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.451 "dma_device_type": 2 00:12:57.451 } 00:12:57.451 ], 00:12:57.451 "driver_specific": { 00:12:57.451 "raid": { 00:12:57.451 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:57.451 "strip_size_kb": 0, 00:12:57.451 "state": "online", 00:12:57.451 "raid_level": "raid1", 00:12:57.451 "superblock": true, 00:12:57.451 "num_base_bdevs": 4, 00:12:57.451 "num_base_bdevs_discovered": 4, 00:12:57.451 "num_base_bdevs_operational": 4, 00:12:57.451 "base_bdevs_list": [ 00:12:57.451 { 00:12:57.451 "name": "BaseBdev1", 00:12:57.451 "uuid": "6fabc966-4d09-4409-a473-e693bacd536f", 00:12:57.451 "is_configured": true, 00:12:57.451 "data_offset": 2048, 00:12:57.451 "data_size": 63488 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "name": "BaseBdev2", 00:12:57.451 "uuid": "92f73776-ada2-4df7-9f7e-c77db6daed49", 00:12:57.451 "is_configured": true, 00:12:57.451 "data_offset": 2048, 00:12:57.451 "data_size": 63488 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "name": "BaseBdev3", 00:12:57.451 "uuid": "94461244-e689-46ed-9297-3bb929084bc4", 00:12:57.451 "is_configured": true, 
00:12:57.451 "data_offset": 2048, 00:12:57.451 "data_size": 63488 00:12:57.451 }, 00:12:57.451 { 00:12:57.451 "name": "BaseBdev4", 00:12:57.451 "uuid": "ad797613-de3e-4e7a-bb4c-b275bc0178d0", 00:12:57.451 "is_configured": true, 00:12:57.451 "data_offset": 2048, 00:12:57.451 "data_size": 63488 00:12:57.451 } 00:12:57.451 ] 00:12:57.451 } 00:12:57.451 } 00:12:57.451 }' 00:12:57.451 03:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:57.451 BaseBdev2 00:12:57.451 BaseBdev3 00:12:57.451 BaseBdev4' 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.451 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.710 03:24:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.710 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.710 [2024-11-05 03:24:11.290583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:57.970 03:24:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.970 "name": "Existed_Raid", 00:12:57.970 "uuid": "10515190-f495-4d42-9ac0-891dfa207a2e", 00:12:57.970 "strip_size_kb": 0, 00:12:57.970 
"state": "online", 00:12:57.970 "raid_level": "raid1", 00:12:57.970 "superblock": true, 00:12:57.970 "num_base_bdevs": 4, 00:12:57.970 "num_base_bdevs_discovered": 3, 00:12:57.970 "num_base_bdevs_operational": 3, 00:12:57.970 "base_bdevs_list": [ 00:12:57.970 { 00:12:57.970 "name": null, 00:12:57.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.970 "is_configured": false, 00:12:57.970 "data_offset": 0, 00:12:57.970 "data_size": 63488 00:12:57.970 }, 00:12:57.970 { 00:12:57.970 "name": "BaseBdev2", 00:12:57.970 "uuid": "92f73776-ada2-4df7-9f7e-c77db6daed49", 00:12:57.970 "is_configured": true, 00:12:57.970 "data_offset": 2048, 00:12:57.970 "data_size": 63488 00:12:57.970 }, 00:12:57.970 { 00:12:57.970 "name": "BaseBdev3", 00:12:57.970 "uuid": "94461244-e689-46ed-9297-3bb929084bc4", 00:12:57.970 "is_configured": true, 00:12:57.970 "data_offset": 2048, 00:12:57.970 "data_size": 63488 00:12:57.970 }, 00:12:57.970 { 00:12:57.970 "name": "BaseBdev4", 00:12:57.970 "uuid": "ad797613-de3e-4e7a-bb4c-b275bc0178d0", 00:12:57.970 "is_configured": true, 00:12:57.970 "data_offset": 2048, 00:12:57.970 "data_size": 63488 00:12:57.970 } 00:12:57.970 ] 00:12:57.970 }' 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.970 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.538 03:24:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.538 03:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.538 [2024-11-05 03:24:11.953749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:58.538 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.539 [2024-11-05 03:24:12.096250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.539 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.798 [2024-11-05 03:24:12.232117] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:58.798 [2024-11-05 03:24:12.232227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.798 [2024-11-05 03:24:12.309981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.798 [2024-11-05 03:24:12.310331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.798 [2024-11-05 03:24:12.310365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:58.798 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.799 BaseBdev2 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.799 03:24:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:58.799 [ 00:12:58.799 { 00:12:58.799 "name": "BaseBdev2", 00:12:58.799 "aliases": [ 00:12:58.799 "2cfd1553-d6da-4767-a82c-d79f58f0340a" 00:12:58.799 ], 00:12:58.799 "product_name": "Malloc disk", 00:12:58.799 "block_size": 512, 00:12:58.799 "num_blocks": 65536, 00:12:58.799 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:12:58.799 "assigned_rate_limits": { 00:12:58.799 "rw_ios_per_sec": 0, 00:12:58.799 "rw_mbytes_per_sec": 0, 00:12:58.799 "r_mbytes_per_sec": 0, 00:12:58.799 "w_mbytes_per_sec": 0 00:12:58.799 }, 00:12:58.799 "claimed": false, 00:12:58.799 "zoned": false, 00:12:58.799 "supported_io_types": { 00:12:58.799 "read": true, 00:12:58.799 "write": true, 00:12:58.799 "unmap": true, 00:12:58.799 "flush": true, 00:12:58.799 "reset": true, 00:12:58.799 "nvme_admin": false, 00:12:58.799 "nvme_io": false, 00:12:58.799 "nvme_io_md": false, 00:12:58.799 "write_zeroes": true, 00:12:58.799 "zcopy": true, 00:12:58.799 "get_zone_info": false, 00:12:58.799 "zone_management": false, 00:12:58.799 "zone_append": false, 00:12:58.799 "compare": false, 00:12:59.058 "compare_and_write": false, 00:12:59.058 "abort": true, 00:12:59.058 "seek_hole": false, 00:12:59.058 "seek_data": false, 00:12:59.058 "copy": true, 00:12:59.058 "nvme_iov_md": false 00:12:59.058 }, 00:12:59.058 "memory_domains": [ 00:12:59.058 { 00:12:59.058 "dma_device_id": "system", 00:12:59.058 "dma_device_type": 1 00:12:59.058 }, 00:12:59.058 { 00:12:59.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.058 "dma_device_type": 2 00:12:59.058 } 00:12:59.058 ], 00:12:59.058 "driver_specific": {} 00:12:59.058 } 00:12:59.058 ] 00:12:59.058 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.058 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.058 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.059 03:24:12 
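The `waitforbdev BaseBdev2` sequence above calls `bdev_wait_for_examine` and then `bdev_get_bdevs -b BaseBdev2 -t 2000` until the descriptor appears. A minimal Python sketch of that polling loop, under the assumption of a caller-supplied `get_bdevs(name)` callable standing in for the RPC (the helper names here are hypothetical, not SPDK's actual Python bindings):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_s=2.0, poll_s=0.1):
    """Poll get_bdevs(name) until the named bdev shows up or the
    timeout elapses, mirroring the shell helper's retry loop."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # get_bdevs returns a list of descriptors, as bdev_get_bdevs does,
        # or an empty list while the bdev does not yet exist.
        if any(b.get("name") == name for b in get_bdevs(name)):
            return True
        time.sleep(poll_s)
    return False

# Simulated RPC backend: only BaseBdev2 exists.
def fake_get_bdevs(name):
    table = {"BaseBdev2": [{"name": "BaseBdev2",
                            "block_size": 512, "num_blocks": 65536}]}
    return table.get(name, [])

print(wait_for_bdev(fake_get_bdevs, "BaseBdev2"))        # found on first poll
print(wait_for_bdev(fake_get_bdevs, "NoSuchBdev", 0.3))  # times out, False
```

In the real helper the 2000 is a millisecond timeout passed to the target; here it maps onto `timeout_s` on the client side.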
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 BaseBdev3 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 [ 00:12:59.059 { 00:12:59.059 "name": "BaseBdev3", 00:12:59.059 "aliases": [ 00:12:59.059 "c43bcc2d-8049-41f2-924d-3367dc88df27" 00:12:59.059 ], 00:12:59.059 "product_name": "Malloc disk", 00:12:59.059 "block_size": 512, 00:12:59.059 "num_blocks": 65536, 00:12:59.059 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:12:59.059 "assigned_rate_limits": { 00:12:59.059 "rw_ios_per_sec": 0, 00:12:59.059 "rw_mbytes_per_sec": 0, 00:12:59.059 "r_mbytes_per_sec": 0, 00:12:59.059 "w_mbytes_per_sec": 0 00:12:59.059 }, 00:12:59.059 "claimed": false, 00:12:59.059 "zoned": false, 00:12:59.059 "supported_io_types": { 00:12:59.059 "read": true, 00:12:59.059 "write": true, 00:12:59.059 "unmap": true, 00:12:59.059 "flush": true, 00:12:59.059 "reset": true, 00:12:59.059 "nvme_admin": false, 00:12:59.059 "nvme_io": false, 00:12:59.059 "nvme_io_md": false, 00:12:59.059 "write_zeroes": true, 00:12:59.059 "zcopy": true, 00:12:59.059 "get_zone_info": false, 00:12:59.059 "zone_management": false, 00:12:59.059 "zone_append": false, 00:12:59.059 "compare": false, 00:12:59.059 "compare_and_write": false, 00:12:59.059 "abort": true, 00:12:59.059 "seek_hole": false, 00:12:59.059 "seek_data": false, 00:12:59.059 "copy": true, 00:12:59.059 "nvme_iov_md": false 00:12:59.059 }, 00:12:59.059 "memory_domains": [ 00:12:59.059 { 00:12:59.059 "dma_device_id": "system", 00:12:59.059 "dma_device_type": 1 00:12:59.059 }, 00:12:59.059 { 00:12:59.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.059 "dma_device_type": 2 00:12:59.059 } 00:12:59.059 ], 00:12:59.059 "driver_specific": {} 00:12:59.059 } 00:12:59.059 ] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 BaseBdev4 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 [ 00:12:59.059 { 00:12:59.059 "name": "BaseBdev4", 00:12:59.059 "aliases": [ 00:12:59.059 "7b469c83-1717-4fbd-8f4f-448159b9e631" 00:12:59.059 ], 00:12:59.059 "product_name": "Malloc disk", 00:12:59.059 "block_size": 512, 00:12:59.059 "num_blocks": 65536, 00:12:59.059 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:12:59.059 "assigned_rate_limits": { 00:12:59.059 "rw_ios_per_sec": 0, 00:12:59.059 "rw_mbytes_per_sec": 0, 00:12:59.059 "r_mbytes_per_sec": 0, 00:12:59.059 "w_mbytes_per_sec": 0 00:12:59.059 }, 00:12:59.059 "claimed": false, 00:12:59.059 "zoned": false, 00:12:59.059 "supported_io_types": { 00:12:59.059 "read": true, 00:12:59.059 "write": true, 00:12:59.059 "unmap": true, 00:12:59.059 "flush": true, 00:12:59.059 "reset": true, 00:12:59.059 "nvme_admin": false, 00:12:59.059 "nvme_io": false, 00:12:59.059 "nvme_io_md": false, 00:12:59.059 "write_zeroes": true, 00:12:59.059 "zcopy": true, 00:12:59.059 "get_zone_info": false, 00:12:59.059 "zone_management": false, 00:12:59.059 "zone_append": false, 00:12:59.059 "compare": false, 00:12:59.059 "compare_and_write": false, 00:12:59.059 "abort": true, 00:12:59.059 "seek_hole": false, 00:12:59.059 "seek_data": false, 00:12:59.059 "copy": true, 00:12:59.059 "nvme_iov_md": false 00:12:59.059 }, 00:12:59.059 "memory_domains": [ 00:12:59.059 { 00:12:59.059 "dma_device_id": "system", 00:12:59.059 "dma_device_type": 1 00:12:59.059 }, 00:12:59.059 { 00:12:59.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.059 "dma_device_type": 2 00:12:59.059 } 00:12:59.059 ], 00:12:59.059 "driver_specific": {} 00:12:59.059 } 00:12:59.059 ] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 [2024-11-05 03:24:12.595074] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.059 [2024-11-05 03:24:12.595288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.059 [2024-11-05 03:24:12.595418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.059 [2024-11-05 03:24:12.597785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.059 [2024-11-05 03:24:12.598082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.059 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.059 "name": "Existed_Raid", 00:12:59.059 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:12:59.060 "strip_size_kb": 0, 00:12:59.060 "state": "configuring", 00:12:59.060 "raid_level": "raid1", 00:12:59.060 "superblock": true, 00:12:59.060 "num_base_bdevs": 4, 00:12:59.060 "num_base_bdevs_discovered": 3, 00:12:59.060 "num_base_bdevs_operational": 4, 00:12:59.060 "base_bdevs_list": [ 00:12:59.060 { 00:12:59.060 "name": "BaseBdev1", 00:12:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.060 "is_configured": false, 00:12:59.060 "data_offset": 0, 00:12:59.060 "data_size": 0 00:12:59.060 }, 00:12:59.060 { 00:12:59.060 "name": "BaseBdev2", 00:12:59.060 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 
00:12:59.060 "is_configured": true, 00:12:59.060 "data_offset": 2048, 00:12:59.060 "data_size": 63488 00:12:59.060 }, 00:12:59.060 { 00:12:59.060 "name": "BaseBdev3", 00:12:59.060 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:12:59.060 "is_configured": true, 00:12:59.060 "data_offset": 2048, 00:12:59.060 "data_size": 63488 00:12:59.060 }, 00:12:59.060 { 00:12:59.060 "name": "BaseBdev4", 00:12:59.060 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:12:59.060 "is_configured": true, 00:12:59.060 "data_offset": 2048, 00:12:59.060 "data_size": 63488 00:12:59.060 } 00:12:59.060 ] 00:12:59.060 }' 00:12:59.060 03:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.060 03:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.627 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 [2024-11-05 03:24:13.123189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.628 "name": "Existed_Raid", 00:12:59.628 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:12:59.628 "strip_size_kb": 0, 00:12:59.628 "state": "configuring", 00:12:59.628 "raid_level": "raid1", 00:12:59.628 "superblock": true, 00:12:59.628 "num_base_bdevs": 4, 00:12:59.628 "num_base_bdevs_discovered": 2, 00:12:59.628 "num_base_bdevs_operational": 4, 00:12:59.628 "base_bdevs_list": [ 00:12:59.628 { 00:12:59.628 "name": "BaseBdev1", 00:12:59.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.628 "is_configured": false, 00:12:59.628 "data_offset": 0, 00:12:59.628 "data_size": 0 00:12:59.628 }, 00:12:59.628 { 00:12:59.628 "name": null, 00:12:59.628 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:12:59.628 
"is_configured": false, 00:12:59.628 "data_offset": 0, 00:12:59.628 "data_size": 63488 00:12:59.628 }, 00:12:59.628 { 00:12:59.628 "name": "BaseBdev3", 00:12:59.628 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:12:59.628 "is_configured": true, 00:12:59.628 "data_offset": 2048, 00:12:59.628 "data_size": 63488 00:12:59.628 }, 00:12:59.628 { 00:12:59.628 "name": "BaseBdev4", 00:12:59.628 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:12:59.628 "is_configured": true, 00:12:59.628 "data_offset": 2048, 00:12:59.628 "data_size": 63488 00:12:59.628 } 00:12:59.628 ] 00:12:59.628 }' 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.628 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 [2024-11-05 03:24:13.746833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.195 BaseBdev1 
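The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` calls above boil down to selecting the named array from `bdev_raid_get_bdevs all` and comparing a handful of fields. A small sketch of that check in Python rather than shell, run against the degraded record the log prints after `bdev_raid_remove_base_bdev BaseBdev2` (the function name is mine; the field names and values match the JSON in the log):

```python
def verify_raid_bdev_state(records, name, state, level, strip_kb, operational):
    """Find the raid bdev by name and compare the fields the shell
    helper asserts on; raises AssertionError on mismatch."""
    info = next(r for r in records if r["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    # discovered == base bdevs that are actually configured
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info

# Trimmed copy of the record printed after BaseBdev2 was removed.
existed_raid = {
    "name": "Existed_Raid", "state": "configuring", "raid_level": "raid1",
    "strip_size_kb": 0, "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": False},
        {"name": None,        "is_configured": False},
        {"name": "BaseBdev3", "is_configured": True},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}
verify_raid_bdev_state([existed_raid], "Existed_Raid", "configuring",
                       "raid1", 0, 4)  # passes: 2 of 4 base bdevs configured
```

Strip size 0 is expected here because raid1 mirrors rather than stripes.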
00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 [ 00:13:00.195 { 00:13:00.195 "name": "BaseBdev1", 00:13:00.195 "aliases": [ 00:13:00.195 "6fdd2318-ad47-4a78-98b7-e639b2dd0de1" 00:13:00.195 ], 00:13:00.195 "product_name": "Malloc disk", 00:13:00.195 "block_size": 512, 00:13:00.195 "num_blocks": 65536, 00:13:00.195 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:00.195 "assigned_rate_limits": { 00:13:00.195 
"rw_ios_per_sec": 0, 00:13:00.195 "rw_mbytes_per_sec": 0, 00:13:00.195 "r_mbytes_per_sec": 0, 00:13:00.195 "w_mbytes_per_sec": 0 00:13:00.195 }, 00:13:00.195 "claimed": true, 00:13:00.195 "claim_type": "exclusive_write", 00:13:00.195 "zoned": false, 00:13:00.195 "supported_io_types": { 00:13:00.195 "read": true, 00:13:00.195 "write": true, 00:13:00.195 "unmap": true, 00:13:00.195 "flush": true, 00:13:00.195 "reset": true, 00:13:00.195 "nvme_admin": false, 00:13:00.195 "nvme_io": false, 00:13:00.195 "nvme_io_md": false, 00:13:00.195 "write_zeroes": true, 00:13:00.195 "zcopy": true, 00:13:00.195 "get_zone_info": false, 00:13:00.195 "zone_management": false, 00:13:00.195 "zone_append": false, 00:13:00.195 "compare": false, 00:13:00.195 "compare_and_write": false, 00:13:00.195 "abort": true, 00:13:00.195 "seek_hole": false, 00:13:00.195 "seek_data": false, 00:13:00.195 "copy": true, 00:13:00.195 "nvme_iov_md": false 00:13:00.195 }, 00:13:00.195 "memory_domains": [ 00:13:00.195 { 00:13:00.195 "dma_device_id": "system", 00:13:00.195 "dma_device_type": 1 00:13:00.195 }, 00:13:00.195 { 00:13:00.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.195 "dma_device_type": 2 00:13:00.195 } 00:13:00.195 ], 00:13:00.195 "driver_specific": {} 00:13:00.195 } 00:13:00.195 ] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.454 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.454 "name": "Existed_Raid", 00:13:00.454 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:00.454 "strip_size_kb": 0, 00:13:00.454 "state": "configuring", 00:13:00.454 "raid_level": "raid1", 00:13:00.454 "superblock": true, 00:13:00.454 "num_base_bdevs": 4, 00:13:00.454 "num_base_bdevs_discovered": 3, 00:13:00.454 "num_base_bdevs_operational": 4, 00:13:00.454 "base_bdevs_list": [ 00:13:00.454 { 00:13:00.454 "name": "BaseBdev1", 00:13:00.454 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:00.454 "is_configured": true, 00:13:00.454 "data_offset": 2048, 00:13:00.454 "data_size": 63488 
00:13:00.454 }, 00:13:00.454 { 00:13:00.454 "name": null, 00:13:00.454 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:00.454 "is_configured": false, 00:13:00.454 "data_offset": 0, 00:13:00.454 "data_size": 63488 00:13:00.454 }, 00:13:00.454 { 00:13:00.454 "name": "BaseBdev3", 00:13:00.454 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:00.454 "is_configured": true, 00:13:00.454 "data_offset": 2048, 00:13:00.454 "data_size": 63488 00:13:00.454 }, 00:13:00.454 { 00:13:00.454 "name": "BaseBdev4", 00:13:00.454 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:00.454 "is_configured": true, 00:13:00.454 "data_offset": 2048, 00:13:00.454 "data_size": 63488 00:13:00.454 } 00:13:00.454 ] 00:13:00.454 }' 00:13:00.454 03:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.454 03:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.721 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.721 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.721 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.721 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.721 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.991 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:00.991 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 
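The jq probes in the log (`.[0]["name"]`, `.[0].base_bdevs_list[0].is_configured`, `.[] | select(.name == "Existed_Raid")`) each pull one field out of the `bdev_raid_get_bdevs all` array. The same lookups expressed in Python, over a trimmed copy of the record the log shows once BaseBdev1 is claimed (values taken from the log; this is an illustration, not part of the test suite):

```python
import json

# Trimmed bdev_raid_get_bdevs output after BaseBdev1 joined the array.
output = json.loads("""[{
  "name": "Existed_Raid",
  "state": "configuring",
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}]""")

# jq -r '.[0]["name"]'
print(output[0]["name"])                                 # Existed_Raid
# jq '.[0].base_bdevs_list[0].is_configured' (the @300 check above)
print(output[0]["base_bdevs_list"][0]["is_configured"])  # True
# jq -r '.[] | select(.name == "Existed_Raid")'
raid = next(r for r in output if r["name"] == "Existed_Raid")
print(raid["state"])                                     # configuring
```

Removed slots keep their position in `base_bdevs_list` with `name` null, which is why the indexed `is_configured` probes are stable across remove/re-add cycles.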
[2024-11-05 03:24:14.359452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.992 03:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.992 "name": "Existed_Raid", 00:13:00.992 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:00.992 "strip_size_kb": 0, 00:13:00.992 "state": "configuring", 00:13:00.992 "raid_level": "raid1", 00:13:00.992 "superblock": true, 00:13:00.992 "num_base_bdevs": 4, 00:13:00.992 "num_base_bdevs_discovered": 2, 00:13:00.992 "num_base_bdevs_operational": 4, 00:13:00.992 "base_bdevs_list": [ 00:13:00.992 { 00:13:00.992 "name": "BaseBdev1", 00:13:00.992 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:00.992 "is_configured": true, 00:13:00.992 "data_offset": 2048, 00:13:00.992 "data_size": 63488 00:13:00.992 }, 00:13:00.992 { 00:13:00.992 "name": null, 00:13:00.992 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:00.992 "is_configured": false, 00:13:00.992 "data_offset": 0, 00:13:00.992 "data_size": 63488 00:13:00.992 }, 00:13:00.992 { 00:13:00.992 "name": null, 00:13:00.992 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:00.992 "is_configured": false, 00:13:00.992 "data_offset": 0, 00:13:00.992 "data_size": 63488 00:13:00.992 }, 00:13:00.992 { 00:13:00.992 "name": "BaseBdev4", 00:13:00.992 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:00.992 "is_configured": true, 00:13:00.992 "data_offset": 2048, 00:13:00.992 "data_size": 63488 00:13:00.992 } 00:13:00.992 ] 00:13:00.992 }' 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.992 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.562 
03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 [2024-11-05 03:24:14.955612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 03:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.562 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.562 "name": "Existed_Raid", 00:13:01.562 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:01.562 "strip_size_kb": 0, 00:13:01.562 "state": "configuring", 00:13:01.562 "raid_level": "raid1", 00:13:01.562 "superblock": true, 00:13:01.562 "num_base_bdevs": 4, 00:13:01.562 "num_base_bdevs_discovered": 3, 00:13:01.562 "num_base_bdevs_operational": 4, 00:13:01.562 "base_bdevs_list": [ 00:13:01.562 { 00:13:01.562 "name": "BaseBdev1", 00:13:01.562 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:01.562 "is_configured": true, 00:13:01.562 "data_offset": 2048, 00:13:01.562 "data_size": 63488 00:13:01.562 }, 00:13:01.562 { 00:13:01.562 "name": null, 00:13:01.562 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:01.562 "is_configured": false, 00:13:01.562 "data_offset": 0, 00:13:01.562 "data_size": 63488 00:13:01.562 }, 00:13:01.562 { 00:13:01.562 "name": "BaseBdev3", 00:13:01.562 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:01.562 "is_configured": true, 00:13:01.562 "data_offset": 2048, 00:13:01.562 "data_size": 63488 00:13:01.562 }, 00:13:01.562 { 00:13:01.562 "name": "BaseBdev4", 00:13:01.562 "uuid": 
"7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:01.562 "is_configured": true, 00:13:01.562 "data_offset": 2048, 00:13:01.562 "data_size": 63488 00:13:01.562 } 00:13:01.562 ] 00:13:01.562 }' 00:13:01.562 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.562 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.131 [2024-11-05 03:24:15.539824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.131 "name": "Existed_Raid", 00:13:02.131 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:02.131 "strip_size_kb": 0, 00:13:02.131 "state": "configuring", 00:13:02.131 "raid_level": "raid1", 00:13:02.131 "superblock": true, 00:13:02.131 "num_base_bdevs": 4, 00:13:02.131 "num_base_bdevs_discovered": 2, 00:13:02.131 "num_base_bdevs_operational": 4, 00:13:02.131 "base_bdevs_list": [ 00:13:02.131 { 00:13:02.131 "name": null, 00:13:02.131 
"uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:02.131 "is_configured": false, 00:13:02.131 "data_offset": 0, 00:13:02.131 "data_size": 63488 00:13:02.131 }, 00:13:02.131 { 00:13:02.131 "name": null, 00:13:02.131 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:02.131 "is_configured": false, 00:13:02.131 "data_offset": 0, 00:13:02.131 "data_size": 63488 00:13:02.131 }, 00:13:02.131 { 00:13:02.131 "name": "BaseBdev3", 00:13:02.131 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:02.131 "is_configured": true, 00:13:02.131 "data_offset": 2048, 00:13:02.131 "data_size": 63488 00:13:02.131 }, 00:13:02.131 { 00:13:02.131 "name": "BaseBdev4", 00:13:02.131 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:02.131 "is_configured": true, 00:13:02.131 "data_offset": 2048, 00:13:02.131 "data_size": 63488 00:13:02.131 } 00:13:02.131 ] 00:13:02.131 }' 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.131 03:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.699 [2024-11-05 03:24:16.196740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.699 03:24:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.699 "name": "Existed_Raid", 00:13:02.699 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:02.699 "strip_size_kb": 0, 00:13:02.699 "state": "configuring", 00:13:02.699 "raid_level": "raid1", 00:13:02.699 "superblock": true, 00:13:02.699 "num_base_bdevs": 4, 00:13:02.699 "num_base_bdevs_discovered": 3, 00:13:02.699 "num_base_bdevs_operational": 4, 00:13:02.699 "base_bdevs_list": [ 00:13:02.699 { 00:13:02.699 "name": null, 00:13:02.699 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:02.699 "is_configured": false, 00:13:02.699 "data_offset": 0, 00:13:02.699 "data_size": 63488 00:13:02.699 }, 00:13:02.699 { 00:13:02.699 "name": "BaseBdev2", 00:13:02.699 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:02.699 "is_configured": true, 00:13:02.699 "data_offset": 2048, 00:13:02.699 "data_size": 63488 00:13:02.699 }, 00:13:02.699 { 00:13:02.699 "name": "BaseBdev3", 00:13:02.699 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:02.699 "is_configured": true, 00:13:02.699 "data_offset": 2048, 00:13:02.699 "data_size": 63488 00:13:02.699 }, 00:13:02.699 { 00:13:02.699 "name": "BaseBdev4", 00:13:02.699 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:02.699 "is_configured": true, 00:13:02.699 "data_offset": 2048, 00:13:02.699 "data_size": 63488 00:13:02.699 } 00:13:02.699 ] 00:13:02.699 }' 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.699 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.267 03:24:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6fdd2318-ad47-4a78-98b7-e639b2dd0de1 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.267 [2024-11-05 03:24:16.889458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:03.267 [2024-11-05 03:24:16.889786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:03.267 [2024-11-05 03:24:16.889811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.267 NewBaseBdev 00:13:03.267 [2024-11-05 03:24:16.890176] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:03.267 [2024-11-05 03:24:16.890467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:03.267 [2024-11-05 03:24:16.890491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:03.267 [2024-11-05 03:24:16.890654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:03.267 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.526 
03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.526 [ 00:13:03.526 { 00:13:03.526 "name": "NewBaseBdev", 00:13:03.526 "aliases": [ 00:13:03.526 "6fdd2318-ad47-4a78-98b7-e639b2dd0de1" 00:13:03.526 ], 00:13:03.526 "product_name": "Malloc disk", 00:13:03.526 "block_size": 512, 00:13:03.526 "num_blocks": 65536, 00:13:03.526 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:03.526 "assigned_rate_limits": { 00:13:03.526 "rw_ios_per_sec": 0, 00:13:03.526 "rw_mbytes_per_sec": 0, 00:13:03.526 "r_mbytes_per_sec": 0, 00:13:03.526 "w_mbytes_per_sec": 0 00:13:03.526 }, 00:13:03.526 "claimed": true, 00:13:03.526 "claim_type": "exclusive_write", 00:13:03.526 "zoned": false, 00:13:03.526 "supported_io_types": { 00:13:03.526 "read": true, 00:13:03.526 "write": true, 00:13:03.526 "unmap": true, 00:13:03.526 "flush": true, 00:13:03.526 "reset": true, 00:13:03.526 "nvme_admin": false, 00:13:03.526 "nvme_io": false, 00:13:03.526 "nvme_io_md": false, 00:13:03.526 "write_zeroes": true, 00:13:03.526 "zcopy": true, 00:13:03.526 "get_zone_info": false, 00:13:03.526 "zone_management": false, 00:13:03.526 "zone_append": false, 00:13:03.526 "compare": false, 00:13:03.526 "compare_and_write": false, 00:13:03.526 "abort": true, 00:13:03.526 "seek_hole": false, 00:13:03.526 "seek_data": false, 00:13:03.526 "copy": true, 00:13:03.526 "nvme_iov_md": false 00:13:03.526 }, 00:13:03.526 "memory_domains": [ 00:13:03.526 { 00:13:03.526 "dma_device_id": "system", 00:13:03.526 "dma_device_type": 1 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.526 "dma_device_type": 2 00:13:03.526 } 00:13:03.526 ], 00:13:03.526 "driver_specific": {} 00:13:03.526 } 00:13:03.526 ] 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:03.526 03:24:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.526 "name": "Existed_Raid", 00:13:03.526 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:03.526 "strip_size_kb": 0, 00:13:03.526 
"state": "online", 00:13:03.526 "raid_level": "raid1", 00:13:03.526 "superblock": true, 00:13:03.526 "num_base_bdevs": 4, 00:13:03.526 "num_base_bdevs_discovered": 4, 00:13:03.526 "num_base_bdevs_operational": 4, 00:13:03.526 "base_bdevs_list": [ 00:13:03.526 { 00:13:03.526 "name": "NewBaseBdev", 00:13:03.526 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "name": "BaseBdev2", 00:13:03.526 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "name": "BaseBdev3", 00:13:03.526 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "name": "BaseBdev4", 00:13:03.526 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 } 00:13:03.526 ] 00:13:03.526 }' 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.526 03:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.094 
03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.094 [2024-11-05 03:24:17.454601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.094 "name": "Existed_Raid", 00:13:04.094 "aliases": [ 00:13:04.094 "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6" 00:13:04.094 ], 00:13:04.094 "product_name": "Raid Volume", 00:13:04.094 "block_size": 512, 00:13:04.094 "num_blocks": 63488, 00:13:04.094 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:04.094 "assigned_rate_limits": { 00:13:04.094 "rw_ios_per_sec": 0, 00:13:04.094 "rw_mbytes_per_sec": 0, 00:13:04.094 "r_mbytes_per_sec": 0, 00:13:04.094 "w_mbytes_per_sec": 0 00:13:04.094 }, 00:13:04.094 "claimed": false, 00:13:04.094 "zoned": false, 00:13:04.094 "supported_io_types": { 00:13:04.094 "read": true, 00:13:04.094 "write": true, 00:13:04.094 "unmap": false, 00:13:04.094 "flush": false, 00:13:04.094 "reset": true, 00:13:04.094 "nvme_admin": false, 00:13:04.094 "nvme_io": false, 00:13:04.094 "nvme_io_md": false, 00:13:04.094 "write_zeroes": true, 00:13:04.094 "zcopy": false, 00:13:04.094 "get_zone_info": false, 00:13:04.094 "zone_management": false, 00:13:04.094 "zone_append": false, 00:13:04.094 "compare": false, 00:13:04.094 "compare_and_write": false, 00:13:04.094 
"abort": false, 00:13:04.094 "seek_hole": false, 00:13:04.094 "seek_data": false, 00:13:04.094 "copy": false, 00:13:04.094 "nvme_iov_md": false 00:13:04.094 }, 00:13:04.094 "memory_domains": [ 00:13:04.094 { 00:13:04.094 "dma_device_id": "system", 00:13:04.094 "dma_device_type": 1 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.094 "dma_device_type": 2 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "system", 00:13:04.094 "dma_device_type": 1 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.094 "dma_device_type": 2 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "system", 00:13:04.094 "dma_device_type": 1 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.094 "dma_device_type": 2 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "system", 00:13:04.094 "dma_device_type": 1 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.094 "dma_device_type": 2 00:13:04.094 } 00:13:04.094 ], 00:13:04.094 "driver_specific": { 00:13:04.094 "raid": { 00:13:04.094 "uuid": "27a62f56-b9bd-4a23-8ae9-d20bc00bfcb6", 00:13:04.094 "strip_size_kb": 0, 00:13:04.094 "state": "online", 00:13:04.094 "raid_level": "raid1", 00:13:04.094 "superblock": true, 00:13:04.094 "num_base_bdevs": 4, 00:13:04.094 "num_base_bdevs_discovered": 4, 00:13:04.094 "num_base_bdevs_operational": 4, 00:13:04.094 "base_bdevs_list": [ 00:13:04.094 { 00:13:04.094 "name": "NewBaseBdev", 00:13:04.094 "uuid": "6fdd2318-ad47-4a78-98b7-e639b2dd0de1", 00:13:04.094 "is_configured": true, 00:13:04.094 "data_offset": 2048, 00:13:04.094 "data_size": 63488 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "name": "BaseBdev2", 00:13:04.094 "uuid": "2cfd1553-d6da-4767-a82c-d79f58f0340a", 00:13:04.094 "is_configured": true, 00:13:04.094 "data_offset": 2048, 00:13:04.094 "data_size": 63488 00:13:04.094 }, 00:13:04.094 { 
00:13:04.094 "name": "BaseBdev3", 00:13:04.094 "uuid": "c43bcc2d-8049-41f2-924d-3367dc88df27", 00:13:04.094 "is_configured": true, 00:13:04.094 "data_offset": 2048, 00:13:04.094 "data_size": 63488 00:13:04.094 }, 00:13:04.094 { 00:13:04.094 "name": "BaseBdev4", 00:13:04.094 "uuid": "7b469c83-1717-4fbd-8f4f-448159b9e631", 00:13:04.094 "is_configured": true, 00:13:04.094 "data_offset": 2048, 00:13:04.094 "data_size": 63488 00:13:04.094 } 00:13:04.094 ] 00:13:04.094 } 00:13:04.094 } 00:13:04.094 }' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:04.094 BaseBdev2 00:13:04.094 BaseBdev3 00:13:04.094 BaseBdev4' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.094 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:04.095 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.095 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.095 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.353 [2024-11-05 03:24:17.834221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.353 [2024-11-05 03:24:17.834446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.353 [2024-11-05 03:24:17.834557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.353 [2024-11-05 03:24:17.834973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.353 [2024-11-05 03:24:17.834997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73747 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73747 ']' 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73747 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.353 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73747 00:13:04.353 killing process with pid 73747 00:13:04.354 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:04.354 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:04.354 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73747' 00:13:04.354 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73747 00:13:04.354 [2024-11-05 03:24:17.874122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.354 03:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73747 00:13:04.612 [2024-11-05 03:24:18.191177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.547 03:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:05.547 00:13:05.547 real 0m12.708s 00:13:05.547 user 0m21.301s 00:13:05.547 sys 0m1.760s 00:13:05.547 ************************************ 00:13:05.547 END TEST raid_state_function_test_sb 
00:13:05.547 ************************************ 00:13:05.547 03:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:05.547 03:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.547 03:24:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:05.547 03:24:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:05.547 03:24:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:05.547 03:24:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.547 ************************************ 00:13:05.547 START TEST raid_superblock_test 00:13:05.547 ************************************ 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:05.547 03:24:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74433 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74433 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74433 ']' 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:05.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:05.547 03:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.805 [2024-11-05 03:24:19.259152] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:13:05.805 [2024-11-05 03:24:19.259364] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74433 ] 00:13:05.805 [2024-11-05 03:24:19.441681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.064 [2024-11-05 03:24:19.555594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.322 [2024-11-05 03:24:19.734400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.322 [2024-11-05 03:24:19.734493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:06.890 
03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.890 malloc1 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.890 [2024-11-05 03:24:20.303848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:06.890 [2024-11-05 03:24:20.304100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.890 [2024-11-05 03:24:20.304177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.890 [2024-11-05 03:24:20.304425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.890 [2024-11-05 03:24:20.307287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.890 [2024-11-05 03:24:20.307455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:06.890 pt1 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.890 malloc2 00:13:06.890 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 [2024-11-05 03:24:20.358637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.891 [2024-11-05 03:24:20.358722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.891 [2024-11-05 03:24:20.358751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.891 [2024-11-05 03:24:20.358766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.891 [2024-11-05 03:24:20.361610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.891 [2024-11-05 03:24:20.361655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.891 
pt2 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 malloc3 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 [2024-11-05 03:24:20.421614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.891 [2024-11-05 03:24:20.421695] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.891 [2024-11-05 03:24:20.421728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:06.891 [2024-11-05 03:24:20.421744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.891 [2024-11-05 03:24:20.424424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.891 [2024-11-05 03:24:20.424617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.891 pt3 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 malloc4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 [2024-11-05 03:24:20.474460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:06.891 [2024-11-05 03:24:20.474534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.891 [2024-11-05 03:24:20.474561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:06.891 [2024-11-05 03:24:20.474575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.891 [2024-11-05 03:24:20.477213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.891 [2024-11-05 03:24:20.477253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:06.891 pt4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 [2024-11-05 03:24:20.486482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:06.891 [2024-11-05 03:24:20.488783] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.891 [2024-11-05 03:24:20.488878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.891 [2024-11-05 03:24:20.488937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:06.891 [2024-11-05 03:24:20.489147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:06.891 [2024-11-05 03:24:20.489167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.891 [2024-11-05 03:24:20.489488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:06.891 [2024-11-05 03:24:20.489723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:06.891 [2024-11-05 03:24:20.489746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:06.891 [2024-11-05 03:24:20.489950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.891 
03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.891 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.149 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.149 "name": "raid_bdev1", 00:13:07.149 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:07.149 "strip_size_kb": 0, 00:13:07.149 "state": "online", 00:13:07.149 "raid_level": "raid1", 00:13:07.149 "superblock": true, 00:13:07.149 "num_base_bdevs": 4, 00:13:07.149 "num_base_bdevs_discovered": 4, 00:13:07.149 "num_base_bdevs_operational": 4, 00:13:07.149 "base_bdevs_list": [ 00:13:07.149 { 00:13:07.149 "name": "pt1", 00:13:07.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.149 "is_configured": true, 00:13:07.149 "data_offset": 2048, 00:13:07.149 "data_size": 63488 00:13:07.149 }, 00:13:07.149 { 00:13:07.149 "name": "pt2", 00:13:07.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.149 "is_configured": true, 00:13:07.149 "data_offset": 2048, 00:13:07.149 "data_size": 63488 00:13:07.149 }, 00:13:07.149 { 00:13:07.149 "name": "pt3", 00:13:07.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.149 "is_configured": true, 00:13:07.149 "data_offset": 2048, 00:13:07.149 "data_size": 63488 
00:13:07.149 }, 00:13:07.149 { 00:13:07.149 "name": "pt4", 00:13:07.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:07.149 "is_configured": true, 00:13:07.149 "data_offset": 2048, 00:13:07.149 "data_size": 63488 00:13:07.149 } 00:13:07.149 ] 00:13:07.149 }' 00:13:07.149 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.149 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.408 [2024-11-05 03:24:20.967067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.408 03:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.408 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.408 "name": "raid_bdev1", 00:13:07.408 "aliases": [ 00:13:07.408 "a7a61e77-947f-4010-a693-398cdbc4cbfa" 00:13:07.408 ], 
00:13:07.408 "product_name": "Raid Volume", 00:13:07.408 "block_size": 512, 00:13:07.408 "num_blocks": 63488, 00:13:07.408 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:07.408 "assigned_rate_limits": { 00:13:07.408 "rw_ios_per_sec": 0, 00:13:07.408 "rw_mbytes_per_sec": 0, 00:13:07.408 "r_mbytes_per_sec": 0, 00:13:07.408 "w_mbytes_per_sec": 0 00:13:07.408 }, 00:13:07.408 "claimed": false, 00:13:07.408 "zoned": false, 00:13:07.408 "supported_io_types": { 00:13:07.408 "read": true, 00:13:07.408 "write": true, 00:13:07.408 "unmap": false, 00:13:07.408 "flush": false, 00:13:07.408 "reset": true, 00:13:07.408 "nvme_admin": false, 00:13:07.408 "nvme_io": false, 00:13:07.408 "nvme_io_md": false, 00:13:07.408 "write_zeroes": true, 00:13:07.408 "zcopy": false, 00:13:07.408 "get_zone_info": false, 00:13:07.408 "zone_management": false, 00:13:07.408 "zone_append": false, 00:13:07.408 "compare": false, 00:13:07.408 "compare_and_write": false, 00:13:07.408 "abort": false, 00:13:07.408 "seek_hole": false, 00:13:07.408 "seek_data": false, 00:13:07.408 "copy": false, 00:13:07.408 "nvme_iov_md": false 00:13:07.408 }, 00:13:07.408 "memory_domains": [ 00:13:07.408 { 00:13:07.408 "dma_device_id": "system", 00:13:07.408 "dma_device_type": 1 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.408 "dma_device_type": 2 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": "system", 00:13:07.408 "dma_device_type": 1 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.408 "dma_device_type": 2 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": "system", 00:13:07.408 "dma_device_type": 1 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.408 "dma_device_type": 2 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": "system", 00:13:07.408 "dma_device_type": 1 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:07.408 "dma_device_type": 2 00:13:07.408 } 00:13:07.408 ], 00:13:07.408 "driver_specific": { 00:13:07.408 "raid": { 00:13:07.408 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:07.408 "strip_size_kb": 0, 00:13:07.408 "state": "online", 00:13:07.408 "raid_level": "raid1", 00:13:07.408 "superblock": true, 00:13:07.408 "num_base_bdevs": 4, 00:13:07.408 "num_base_bdevs_discovered": 4, 00:13:07.408 "num_base_bdevs_operational": 4, 00:13:07.408 "base_bdevs_list": [ 00:13:07.408 { 00:13:07.408 "name": "pt1", 00:13:07.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.408 "is_configured": true, 00:13:07.408 "data_offset": 2048, 00:13:07.408 "data_size": 63488 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "name": "pt2", 00:13:07.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.408 "is_configured": true, 00:13:07.408 "data_offset": 2048, 00:13:07.408 "data_size": 63488 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "name": "pt3", 00:13:07.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.408 "is_configured": true, 00:13:07.408 "data_offset": 2048, 00:13:07.408 "data_size": 63488 00:13:07.408 }, 00:13:07.408 { 00:13:07.408 "name": "pt4", 00:13:07.408 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:07.408 "is_configured": true, 00:13:07.408 "data_offset": 2048, 00:13:07.408 "data_size": 63488 00:13:07.408 } 00:13:07.408 ] 00:13:07.408 } 00:13:07.408 } 00:13:07.408 }' 00:13:07.408 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:07.667 pt2 00:13:07.667 pt3 00:13:07.667 pt4' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.667 03:24:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.667 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 [2024-11-05 03:24:21.343062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7a61e77-947f-4010-a693-398cdbc4cbfa 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7a61e77-947f-4010-a693-398cdbc4cbfa ']' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 [2024-11-05 03:24:21.394789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.926 [2024-11-05 03:24:21.394970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.926 [2024-11-05 03:24:21.395191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.926 [2024-11-05 03:24:21.395422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.926 [2024-11-05 03:24:21.395459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.926 03:24:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.926 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.185 [2024-11-05 03:24:21.562818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:08.185 [2024-11-05 03:24:21.565257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:08.185 [2024-11-05 03:24:21.565352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:08.185 [2024-11-05 03:24:21.565587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:08.185 [2024-11-05 03:24:21.565717] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:08.185 [2024-11-05 03:24:21.565795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:08.185 [2024-11-05 03:24:21.565835] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:08.185 [2024-11-05 03:24:21.565881] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:08.185 [2024-11-05 03:24:21.565932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.185 [2024-11-05 03:24:21.565947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:08.185 request: 00:13:08.185 { 00:13:08.185 "name": "raid_bdev1", 00:13:08.185 "raid_level": "raid1", 00:13:08.185 "base_bdevs": [ 00:13:08.185 "malloc1", 00:13:08.185 "malloc2", 00:13:08.185 "malloc3", 00:13:08.185 "malloc4" 00:13:08.185 ], 00:13:08.185 "superblock": false, 00:13:08.185 "method": "bdev_raid_create", 00:13:08.185 "req_id": 1 00:13:08.185 } 00:13:08.185 Got JSON-RPC error response 00:13:08.185 response: 00:13:08.185 { 00:13:08.185 "code": -17, 00:13:08.185 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:08.185 } 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:08.185 
03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.185 [2024-11-05 03:24:21.630829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:08.185 [2024-11-05 03:24:21.631060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.185 [2024-11-05 03:24:21.631121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:08.185 [2024-11-05 03:24:21.631221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.185 [2024-11-05 03:24:21.634160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.185 [2024-11-05 03:24:21.634431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:08.185 [2024-11-05 03:24:21.634622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:08.185 [2024-11-05 03:24:21.634878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:08.185 pt1 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.185 03:24:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.185 "name": "raid_bdev1", 00:13:08.185 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:08.185 "strip_size_kb": 0, 00:13:08.185 "state": "configuring", 00:13:08.185 "raid_level": "raid1", 00:13:08.185 "superblock": true, 00:13:08.185 "num_base_bdevs": 4, 00:13:08.185 "num_base_bdevs_discovered": 1, 00:13:08.185 "num_base_bdevs_operational": 4, 00:13:08.185 "base_bdevs_list": [ 00:13:08.185 { 00:13:08.185 "name": "pt1", 00:13:08.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.185 "is_configured": true, 00:13:08.185 "data_offset": 2048, 00:13:08.185 "data_size": 63488 00:13:08.185 }, 00:13:08.185 { 00:13:08.185 "name": null, 00:13:08.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.185 "is_configured": false, 00:13:08.185 "data_offset": 2048, 00:13:08.185 "data_size": 63488 00:13:08.185 }, 00:13:08.185 { 00:13:08.185 "name": null, 00:13:08.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.185 
"is_configured": false, 00:13:08.185 "data_offset": 2048, 00:13:08.185 "data_size": 63488 00:13:08.185 }, 00:13:08.185 { 00:13:08.185 "name": null, 00:13:08.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.185 "is_configured": false, 00:13:08.185 "data_offset": 2048, 00:13:08.185 "data_size": 63488 00:13:08.185 } 00:13:08.185 ] 00:13:08.185 }' 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.185 03:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.752 [2024-11-05 03:24:22.155346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:08.752 [2024-11-05 03:24:22.155428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.752 [2024-11-05 03:24:22.155457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:08.752 [2024-11-05 03:24:22.155475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.752 [2024-11-05 03:24:22.156017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.752 [2024-11-05 03:24:22.156050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:08.752 [2024-11-05 03:24:22.156138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:08.752 [2024-11-05 03:24:22.156192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:08.752 pt2 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.752 [2024-11-05 03:24:22.163295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.752 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.753 "name": "raid_bdev1", 00:13:08.753 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:08.753 "strip_size_kb": 0, 00:13:08.753 "state": "configuring", 00:13:08.753 "raid_level": "raid1", 00:13:08.753 "superblock": true, 00:13:08.753 "num_base_bdevs": 4, 00:13:08.753 "num_base_bdevs_discovered": 1, 00:13:08.753 "num_base_bdevs_operational": 4, 00:13:08.753 "base_bdevs_list": [ 00:13:08.753 { 00:13:08.753 "name": "pt1", 00:13:08.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.753 "is_configured": true, 00:13:08.753 "data_offset": 2048, 00:13:08.753 "data_size": 63488 00:13:08.753 }, 00:13:08.753 { 00:13:08.753 "name": null, 00:13:08.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.753 "is_configured": false, 00:13:08.753 "data_offset": 0, 00:13:08.753 "data_size": 63488 00:13:08.753 }, 00:13:08.753 { 00:13:08.753 "name": null, 00:13:08.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.753 "is_configured": false, 00:13:08.753 "data_offset": 2048, 00:13:08.753 "data_size": 63488 00:13:08.753 }, 00:13:08.753 { 00:13:08.753 "name": null, 00:13:08.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.753 "is_configured": false, 00:13:08.753 "data_offset": 2048, 00:13:08.753 "data_size": 63488 00:13:08.753 } 00:13:08.753 ] 00:13:08.753 }' 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.753 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.344 [2024-11-05 03:24:22.679491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:09.344 [2024-11-05 03:24:22.679571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.344 [2024-11-05 03:24:22.679607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:09.344 [2024-11-05 03:24:22.679624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.344 [2024-11-05 03:24:22.680144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.344 [2024-11-05 03:24:22.680166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:09.344 [2024-11-05 03:24:22.680260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:09.344 [2024-11-05 03:24:22.680287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:09.344 pt2 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.344 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:09.345 03:24:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.345 [2024-11-05 03:24:22.687442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:09.345 [2024-11-05 03:24:22.687509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.345 [2024-11-05 03:24:22.687535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:09.345 [2024-11-05 03:24:22.687548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.345 [2024-11-05 03:24:22.687971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.345 [2024-11-05 03:24:22.687999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:09.345 [2024-11-05 03:24:22.688069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:09.345 [2024-11-05 03:24:22.688092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:09.345 pt3 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.345 [2024-11-05 03:24:22.695394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:09.345 [2024-11-05 
03:24:22.695458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.345 [2024-11-05 03:24:22.695481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:09.345 [2024-11-05 03:24:22.695494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.345 [2024-11-05 03:24:22.695905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.345 [2024-11-05 03:24:22.695932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:09.345 [2024-11-05 03:24:22.696000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:09.345 [2024-11-05 03:24:22.696024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:09.345 [2024-11-05 03:24:22.696216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:09.345 [2024-11-05 03:24:22.696230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.345 [2024-11-05 03:24:22.696571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:09.345 [2024-11-05 03:24:22.696882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:09.345 [2024-11-05 03:24:22.696908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:09.345 [2024-11-05 03:24:22.697055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.345 pt4 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.345 "name": "raid_bdev1", 00:13:09.345 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:09.345 "strip_size_kb": 0, 00:13:09.345 "state": "online", 00:13:09.345 "raid_level": "raid1", 00:13:09.345 "superblock": true, 00:13:09.345 "num_base_bdevs": 4, 00:13:09.345 
"num_base_bdevs_discovered": 4, 00:13:09.345 "num_base_bdevs_operational": 4, 00:13:09.345 "base_bdevs_list": [ 00:13:09.345 { 00:13:09.345 "name": "pt1", 00:13:09.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:09.345 "is_configured": true, 00:13:09.345 "data_offset": 2048, 00:13:09.345 "data_size": 63488 00:13:09.345 }, 00:13:09.345 { 00:13:09.345 "name": "pt2", 00:13:09.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.345 "is_configured": true, 00:13:09.345 "data_offset": 2048, 00:13:09.345 "data_size": 63488 00:13:09.345 }, 00:13:09.345 { 00:13:09.345 "name": "pt3", 00:13:09.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.345 "is_configured": true, 00:13:09.345 "data_offset": 2048, 00:13:09.345 "data_size": 63488 00:13:09.345 }, 00:13:09.345 { 00:13:09.345 "name": "pt4", 00:13:09.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:09.345 "is_configured": true, 00:13:09.345 "data_offset": 2048, 00:13:09.345 "data_size": 63488 00:13:09.345 } 00:13:09.345 ] 00:13:09.345 }' 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.345 03:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:09.604 03:24:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.604 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.604 [2024-11-05 03:24:23.220016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:09.863 "name": "raid_bdev1", 00:13:09.863 "aliases": [ 00:13:09.863 "a7a61e77-947f-4010-a693-398cdbc4cbfa" 00:13:09.863 ], 00:13:09.863 "product_name": "Raid Volume", 00:13:09.863 "block_size": 512, 00:13:09.863 "num_blocks": 63488, 00:13:09.863 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:09.863 "assigned_rate_limits": { 00:13:09.863 "rw_ios_per_sec": 0, 00:13:09.863 "rw_mbytes_per_sec": 0, 00:13:09.863 "r_mbytes_per_sec": 0, 00:13:09.863 "w_mbytes_per_sec": 0 00:13:09.863 }, 00:13:09.863 "claimed": false, 00:13:09.863 "zoned": false, 00:13:09.863 "supported_io_types": { 00:13:09.863 "read": true, 00:13:09.863 "write": true, 00:13:09.863 "unmap": false, 00:13:09.863 "flush": false, 00:13:09.863 "reset": true, 00:13:09.863 "nvme_admin": false, 00:13:09.863 "nvme_io": false, 00:13:09.863 "nvme_io_md": false, 00:13:09.863 "write_zeroes": true, 00:13:09.863 "zcopy": false, 00:13:09.863 "get_zone_info": false, 00:13:09.863 "zone_management": false, 00:13:09.863 "zone_append": false, 00:13:09.863 "compare": false, 00:13:09.863 "compare_and_write": false, 00:13:09.863 "abort": false, 00:13:09.863 "seek_hole": false, 00:13:09.863 "seek_data": false, 00:13:09.863 "copy": false, 00:13:09.863 "nvme_iov_md": false 00:13:09.863 }, 00:13:09.863 "memory_domains": [ 00:13:09.863 { 00:13:09.863 "dma_device_id": "system", 00:13:09.863 
"dma_device_type": 1 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.863 "dma_device_type": 2 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "system", 00:13:09.863 "dma_device_type": 1 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.863 "dma_device_type": 2 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "system", 00:13:09.863 "dma_device_type": 1 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.863 "dma_device_type": 2 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "system", 00:13:09.863 "dma_device_type": 1 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.863 "dma_device_type": 2 00:13:09.863 } 00:13:09.863 ], 00:13:09.863 "driver_specific": { 00:13:09.863 "raid": { 00:13:09.863 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:09.863 "strip_size_kb": 0, 00:13:09.863 "state": "online", 00:13:09.863 "raid_level": "raid1", 00:13:09.863 "superblock": true, 00:13:09.863 "num_base_bdevs": 4, 00:13:09.863 "num_base_bdevs_discovered": 4, 00:13:09.863 "num_base_bdevs_operational": 4, 00:13:09.863 "base_bdevs_list": [ 00:13:09.863 { 00:13:09.863 "name": "pt1", 00:13:09.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:09.863 "is_configured": true, 00:13:09.863 "data_offset": 2048, 00:13:09.863 "data_size": 63488 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "name": "pt2", 00:13:09.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.863 "is_configured": true, 00:13:09.863 "data_offset": 2048, 00:13:09.863 "data_size": 63488 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "name": "pt3", 00:13:09.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.863 "is_configured": true, 00:13:09.863 "data_offset": 2048, 00:13:09.863 "data_size": 63488 00:13:09.863 }, 00:13:09.863 { 00:13:09.863 "name": "pt4", 00:13:09.863 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:09.863 "is_configured": true, 00:13:09.863 "data_offset": 2048, 00:13:09.863 "data_size": 63488 00:13:09.863 } 00:13:09.863 ] 00:13:09.863 } 00:13:09.863 } 00:13:09.863 }' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:09.863 pt2 00:13:09.863 pt3 00:13:09.863 pt4' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.863 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:10.122 [2024-11-05 03:24:23.584047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7a61e77-947f-4010-a693-398cdbc4cbfa '!=' a7a61e77-947f-4010-a693-398cdbc4cbfa ']' 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.122 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.122 [2024-11-05 03:24:23.635759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:10.123 03:24:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.123 "name": "raid_bdev1", 00:13:10.123 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:10.123 "strip_size_kb": 0, 00:13:10.123 "state": "online", 
00:13:10.123 "raid_level": "raid1", 00:13:10.123 "superblock": true, 00:13:10.123 "num_base_bdevs": 4, 00:13:10.123 "num_base_bdevs_discovered": 3, 00:13:10.123 "num_base_bdevs_operational": 3, 00:13:10.123 "base_bdevs_list": [ 00:13:10.123 { 00:13:10.123 "name": null, 00:13:10.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.123 "is_configured": false, 00:13:10.123 "data_offset": 0, 00:13:10.123 "data_size": 63488 00:13:10.123 }, 00:13:10.123 { 00:13:10.123 "name": "pt2", 00:13:10.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.123 "is_configured": true, 00:13:10.123 "data_offset": 2048, 00:13:10.123 "data_size": 63488 00:13:10.123 }, 00:13:10.123 { 00:13:10.123 "name": "pt3", 00:13:10.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.123 "is_configured": true, 00:13:10.123 "data_offset": 2048, 00:13:10.123 "data_size": 63488 00:13:10.123 }, 00:13:10.123 { 00:13:10.123 "name": "pt4", 00:13:10.123 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.123 "is_configured": true, 00:13:10.123 "data_offset": 2048, 00:13:10.123 "data_size": 63488 00:13:10.123 } 00:13:10.123 ] 00:13:10.123 }' 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.123 03:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 [2024-11-05 03:24:24.175954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.690 [2024-11-05 03:24:24.175988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.690 [2024-11-05 03:24:24.176074] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:10.690 [2024-11-05 03:24:24.176162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.690 [2024-11-05 03:24:24.176176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:10.690 
03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:10.690 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 [2024-11-05 03:24:24.271993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.691 [2024-11-05 03:24:24.272070] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.691 [2024-11-05 03:24:24.272098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:10.691 [2024-11-05 03:24:24.272112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.691 [2024-11-05 03:24:24.275081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.691 [2024-11-05 03:24:24.275122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.691 [2024-11-05 03:24:24.275231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:10.691 [2024-11-05 03:24:24.275283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.691 pt2 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.949 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.949 "name": "raid_bdev1", 00:13:10.949 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:10.949 "strip_size_kb": 0, 00:13:10.949 "state": "configuring", 00:13:10.949 "raid_level": "raid1", 00:13:10.949 "superblock": true, 00:13:10.949 "num_base_bdevs": 4, 00:13:10.949 "num_base_bdevs_discovered": 1, 00:13:10.949 "num_base_bdevs_operational": 3, 00:13:10.949 "base_bdevs_list": [ 00:13:10.949 { 00:13:10.949 "name": null, 00:13:10.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.949 "is_configured": false, 00:13:10.949 "data_offset": 2048, 00:13:10.949 "data_size": 63488 00:13:10.949 }, 00:13:10.949 { 00:13:10.949 "name": "pt2", 00:13:10.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.949 "is_configured": true, 00:13:10.949 "data_offset": 2048, 00:13:10.949 "data_size": 63488 00:13:10.949 }, 00:13:10.949 { 00:13:10.949 "name": null, 00:13:10.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.949 "is_configured": false, 00:13:10.949 "data_offset": 2048, 00:13:10.949 "data_size": 63488 00:13:10.949 }, 00:13:10.949 { 00:13:10.949 "name": null, 00:13:10.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.949 "is_configured": false, 00:13:10.949 "data_offset": 2048, 00:13:10.949 "data_size": 63488 00:13:10.949 } 00:13:10.949 ] 00:13:10.949 }' 
00:13:10.949 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.949 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 [2024-11-05 03:24:24.816159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:11.208 [2024-11-05 03:24:24.816243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.208 [2024-11-05 03:24:24.816274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:11.208 [2024-11-05 03:24:24.816288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.208 [2024-11-05 03:24:24.817120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.208 [2024-11-05 03:24:24.817211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:11.208 [2024-11-05 03:24:24.817373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:11.208 [2024-11-05 03:24:24.817406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:11.208 pt3 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.208 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.466 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.466 "name": "raid_bdev1", 00:13:11.466 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:11.466 "strip_size_kb": 0, 00:13:11.466 "state": "configuring", 00:13:11.466 "raid_level": "raid1", 00:13:11.466 "superblock": true, 00:13:11.466 "num_base_bdevs": 4, 00:13:11.466 "num_base_bdevs_discovered": 2, 00:13:11.466 "num_base_bdevs_operational": 3, 00:13:11.466 
"base_bdevs_list": [ 00:13:11.466 { 00:13:11.466 "name": null, 00:13:11.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.467 "is_configured": false, 00:13:11.467 "data_offset": 2048, 00:13:11.467 "data_size": 63488 00:13:11.467 }, 00:13:11.467 { 00:13:11.467 "name": "pt2", 00:13:11.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.467 "is_configured": true, 00:13:11.467 "data_offset": 2048, 00:13:11.467 "data_size": 63488 00:13:11.467 }, 00:13:11.467 { 00:13:11.467 "name": "pt3", 00:13:11.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.467 "is_configured": true, 00:13:11.467 "data_offset": 2048, 00:13:11.467 "data_size": 63488 00:13:11.467 }, 00:13:11.467 { 00:13:11.467 "name": null, 00:13:11.467 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.467 "is_configured": false, 00:13:11.467 "data_offset": 2048, 00:13:11.467 "data_size": 63488 00:13:11.467 } 00:13:11.467 ] 00:13:11.467 }' 00:13:11.467 03:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.467 03:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.725 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.726 [2024-11-05 03:24:25.344324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:11.726 [2024-11-05 03:24:25.344440] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.726 [2024-11-05 03:24:25.344475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:11.726 [2024-11-05 03:24:25.344491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.726 [2024-11-05 03:24:25.345035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.726 [2024-11-05 03:24:25.345058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:11.726 [2024-11-05 03:24:25.345151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:11.726 [2024-11-05 03:24:25.345186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:11.726 [2024-11-05 03:24:25.345389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:11.726 [2024-11-05 03:24:25.345406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.726 [2024-11-05 03:24:25.345742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:11.726 [2024-11-05 03:24:25.346021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:11.726 [2024-11-05 03:24:25.346046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:11.726 [2024-11-05 03:24:25.346202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.726 pt4 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.726 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.985 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.985 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.985 "name": "raid_bdev1", 00:13:11.985 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:11.985 "strip_size_kb": 0, 00:13:11.985 "state": "online", 00:13:11.985 "raid_level": "raid1", 00:13:11.985 "superblock": true, 00:13:11.985 "num_base_bdevs": 4, 00:13:11.985 "num_base_bdevs_discovered": 3, 00:13:11.985 "num_base_bdevs_operational": 3, 00:13:11.985 "base_bdevs_list": [ 00:13:11.985 { 00:13:11.985 "name": null, 00:13:11.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.985 "is_configured": false, 00:13:11.985 
"data_offset": 2048, 00:13:11.985 "data_size": 63488 00:13:11.985 }, 00:13:11.985 { 00:13:11.985 "name": "pt2", 00:13:11.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.985 "is_configured": true, 00:13:11.985 "data_offset": 2048, 00:13:11.985 "data_size": 63488 00:13:11.985 }, 00:13:11.985 { 00:13:11.985 "name": "pt3", 00:13:11.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.985 "is_configured": true, 00:13:11.985 "data_offset": 2048, 00:13:11.985 "data_size": 63488 00:13:11.985 }, 00:13:11.985 { 00:13:11.985 "name": "pt4", 00:13:11.985 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.985 "is_configured": true, 00:13:11.985 "data_offset": 2048, 00:13:11.985 "data_size": 63488 00:13:11.985 } 00:13:11.985 ] 00:13:11.985 }' 00:13:11.985 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.985 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.243 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.243 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.243 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.243 [2024-11-05 03:24:25.876441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.243 [2024-11-05 03:24:25.876474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.243 [2024-11-05 03:24:25.876568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.243 [2024-11-05 03:24:25.876731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.243 [2024-11-05 03:24:25.876751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:12.501 03:24:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:12.501 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 [2024-11-05 03:24:25.944447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.502 [2024-11-05 03:24:25.944524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:12.502 [2024-11-05 03:24:25.944551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:12.502 [2024-11-05 03:24:25.944572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.502 [2024-11-05 03:24:25.947621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.502 [2024-11-05 03:24:25.947672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.502 [2024-11-05 03:24:25.947795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:12.502 [2024-11-05 03:24:25.947851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.502 [2024-11-05 03:24:25.947998] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:12.502 [2024-11-05 03:24:25.948019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.502 [2024-11-05 03:24:25.948038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:12.502 [2024-11-05 03:24:25.948111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.502 [2024-11-05 03:24:25.948274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.502 pt1 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 03:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.502 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.502 "name": "raid_bdev1", 00:13:12.502 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:12.502 "strip_size_kb": 0, 00:13:12.502 "state": "configuring", 00:13:12.502 "raid_level": "raid1", 00:13:12.502 "superblock": true, 00:13:12.502 "num_base_bdevs": 4, 00:13:12.502 "num_base_bdevs_discovered": 2, 00:13:12.502 "num_base_bdevs_operational": 3, 00:13:12.502 "base_bdevs_list": [ 00:13:12.502 { 00:13:12.502 "name": null, 00:13:12.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.502 "is_configured": false, 00:13:12.502 "data_offset": 2048, 00:13:12.502 
"data_size": 63488 00:13:12.502 }, 00:13:12.502 { 00:13:12.502 "name": "pt2", 00:13:12.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.502 "is_configured": true, 00:13:12.502 "data_offset": 2048, 00:13:12.502 "data_size": 63488 00:13:12.502 }, 00:13:12.502 { 00:13:12.502 "name": "pt3", 00:13:12.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.502 "is_configured": true, 00:13:12.502 "data_offset": 2048, 00:13:12.502 "data_size": 63488 00:13:12.502 }, 00:13:12.502 { 00:13:12.502 "name": null, 00:13:12.502 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.502 "is_configured": false, 00:13:12.502 "data_offset": 2048, 00:13:12.502 "data_size": 63488 00:13:12.502 } 00:13:12.502 ] 00:13:12.502 }' 00:13:12.502 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.502 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.069 [2024-11-05 
03:24:26.528647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.069 [2024-11-05 03:24:26.528803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.069 [2024-11-05 03:24:26.528850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:13.069 [2024-11-05 03:24:26.528864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.069 [2024-11-05 03:24:26.529425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.069 [2024-11-05 03:24:26.529467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:13.069 [2024-11-05 03:24:26.529577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.069 [2024-11-05 03:24:26.529617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.069 [2024-11-05 03:24:26.529789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:13.069 [2024-11-05 03:24:26.529805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.069 [2024-11-05 03:24:26.530188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:13.069 [2024-11-05 03:24:26.530396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:13.069 [2024-11-05 03:24:26.530433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:13.069 [2024-11-05 03:24:26.530610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.069 pt4 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:13.069 03:24:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.069 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.070 "name": "raid_bdev1", 00:13:13.070 "uuid": "a7a61e77-947f-4010-a693-398cdbc4cbfa", 00:13:13.070 "strip_size_kb": 0, 00:13:13.070 "state": "online", 00:13:13.070 "raid_level": "raid1", 00:13:13.070 "superblock": true, 00:13:13.070 "num_base_bdevs": 4, 00:13:13.070 "num_base_bdevs_discovered": 3, 00:13:13.070 "num_base_bdevs_operational": 3, 00:13:13.070 "base_bdevs_list": [ 00:13:13.070 { 
00:13:13.070 "name": null, 00:13:13.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.070 "is_configured": false, 00:13:13.070 "data_offset": 2048, 00:13:13.070 "data_size": 63488 00:13:13.070 }, 00:13:13.070 { 00:13:13.070 "name": "pt2", 00:13:13.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.070 "is_configured": true, 00:13:13.070 "data_offset": 2048, 00:13:13.070 "data_size": 63488 00:13:13.070 }, 00:13:13.070 { 00:13:13.070 "name": "pt3", 00:13:13.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.070 "is_configured": true, 00:13:13.070 "data_offset": 2048, 00:13:13.070 "data_size": 63488 00:13:13.070 }, 00:13:13.070 { 00:13:13.070 "name": "pt4", 00:13:13.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.070 "is_configured": true, 00:13:13.070 "data_offset": 2048, 00:13:13.070 "data_size": 63488 00:13:13.070 } 00:13:13.070 ] 00:13:13.070 }' 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.070 03:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.638 
03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.638 [2024-11-05 03:24:27.121150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a7a61e77-947f-4010-a693-398cdbc4cbfa '!=' a7a61e77-947f-4010-a693-398cdbc4cbfa ']' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74433 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74433 ']' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74433 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74433 00:13:13.638 killing process with pid 74433 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74433' 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74433 00:13:13.638 [2024-11-05 03:24:27.195958] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.638 03:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74433 00:13:13.638 [2024-11-05 03:24:27.196069] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.639 [2024-11-05 03:24:27.196155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.639 [2024-11-05 03:24:27.196173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:13.897 [2024-11-05 03:24:27.503019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.832 ************************************ 00:13:14.832 END TEST raid_superblock_test 00:13:14.832 ************************************ 00:13:14.832 03:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:14.832 00:13:14.832 real 0m9.275s 00:13:14.832 user 0m15.430s 00:13:14.832 sys 0m1.304s 00:13:14.832 03:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:14.832 03:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.090 03:24:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:15.090 03:24:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:15.090 03:24:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.090 03:24:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.090 ************************************ 00:13:15.090 START TEST raid_read_error_test 00:13:15.091 ************************************ 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:15.091 03:24:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WMzcCsHl1C 00:13:15.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74927 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74927 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 74927 ']' 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.091 03:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.091 [2024-11-05 03:24:28.582367] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:13:15.091 [2024-11-05 03:24:28.582523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74927 ] 00:13:15.349 [2024-11-05 03:24:28.749456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.349 [2024-11-05 03:24:28.865225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.608 [2024-11-05 03:24:29.051930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.608 [2024-11-05 03:24:29.051992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 BaseBdev1_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 true 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 [2024-11-05 03:24:29.593366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:16.175 [2024-11-05 03:24:29.593448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.175 [2024-11-05 03:24:29.593474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:16.175 [2024-11-05 03:24:29.593491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.175 [2024-11-05 03:24:29.596511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.175 [2024-11-05 03:24:29.596708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.175 BaseBdev1 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 BaseBdev2_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 true 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 [2024-11-05 03:24:29.650489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:16.175 [2024-11-05 03:24:29.650586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.175 [2024-11-05 03:24:29.650611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:16.175 [2024-11-05 03:24:29.650627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.175 [2024-11-05 03:24:29.653234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.175 [2024-11-05 03:24:29.653474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.175 BaseBdev2 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 BaseBdev3_malloc 00:13:16.175 03:24:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 true 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 [2024-11-05 03:24:29.721698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:16.175 [2024-11-05 03:24:29.721782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.175 [2024-11-05 03:24:29.721807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:16.175 [2024-11-05 03:24:29.721823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.175 [2024-11-05 03:24:29.724673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.175 [2024-11-05 03:24:29.724762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:16.175 BaseBdev3 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 BaseBdev4_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 true 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 [2024-11-05 03:24:29.774686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:16.175 [2024-11-05 03:24:29.774763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.175 [2024-11-05 03:24:29.774787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:16.175 [2024-11-05 03:24:29.774802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.175 [2024-11-05 03:24:29.777482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.175 [2024-11-05 03:24:29.777545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:16.175 BaseBdev4 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 [2024-11-05 03:24:29.782752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.175 [2024-11-05 03:24:29.785117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.175 [2024-11-05 03:24:29.785208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.175 [2024-11-05 03:24:29.785299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:16.175 [2024-11-05 03:24:29.785680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:16.175 [2024-11-05 03:24:29.785703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.175 [2024-11-05 03:24:29.786011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:16.175 [2024-11-05 03:24:29.786244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:16.175 [2024-11-05 03:24:29.786259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:16.175 [2024-11-05 03:24:29.786492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:16.175 03:24:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.175 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.434 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.434 "name": "raid_bdev1", 00:13:16.434 "uuid": "90d60950-25c8-4162-9748-053896520c5a", 00:13:16.434 "strip_size_kb": 0, 00:13:16.434 "state": "online", 00:13:16.434 "raid_level": "raid1", 00:13:16.434 "superblock": true, 00:13:16.434 "num_base_bdevs": 4, 00:13:16.434 "num_base_bdevs_discovered": 4, 00:13:16.434 "num_base_bdevs_operational": 4, 00:13:16.434 "base_bdevs_list": [ 00:13:16.434 { 
00:13:16.434 "name": "BaseBdev1", 00:13:16.434 "uuid": "2cc671ec-a7e6-5376-8f00-65606e1f212c", 00:13:16.434 "is_configured": true, 00:13:16.434 "data_offset": 2048, 00:13:16.434 "data_size": 63488 00:13:16.434 }, 00:13:16.434 { 00:13:16.434 "name": "BaseBdev2", 00:13:16.434 "uuid": "bb1782b6-030a-5b06-9047-c362efa116df", 00:13:16.434 "is_configured": true, 00:13:16.434 "data_offset": 2048, 00:13:16.434 "data_size": 63488 00:13:16.434 }, 00:13:16.434 { 00:13:16.434 "name": "BaseBdev3", 00:13:16.434 "uuid": "875d6558-1fe6-55da-89e4-f191bc0f8267", 00:13:16.434 "is_configured": true, 00:13:16.434 "data_offset": 2048, 00:13:16.434 "data_size": 63488 00:13:16.434 }, 00:13:16.434 { 00:13:16.434 "name": "BaseBdev4", 00:13:16.434 "uuid": "337dbeac-9e5e-53e1-9434-bbba2fe72b05", 00:13:16.434 "is_configured": true, 00:13:16.434 "data_offset": 2048, 00:13:16.434 "data_size": 63488 00:13:16.434 } 00:13:16.434 ] 00:13:16.434 }' 00:13:16.434 03:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.434 03:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.693 03:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:16.693 03:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.953 [2024-11-05 03:24:30.440419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:17.908 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:17.908 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.909 03:24:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.909 03:24:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.909 "name": "raid_bdev1", 00:13:17.909 "uuid": "90d60950-25c8-4162-9748-053896520c5a", 00:13:17.909 "strip_size_kb": 0, 00:13:17.909 "state": "online", 00:13:17.909 "raid_level": "raid1", 00:13:17.909 "superblock": true, 00:13:17.909 "num_base_bdevs": 4, 00:13:17.909 "num_base_bdevs_discovered": 4, 00:13:17.909 "num_base_bdevs_operational": 4, 00:13:17.909 "base_bdevs_list": [ 00:13:17.909 { 00:13:17.909 "name": "BaseBdev1", 00:13:17.909 "uuid": "2cc671ec-a7e6-5376-8f00-65606e1f212c", 00:13:17.909 "is_configured": true, 00:13:17.909 "data_offset": 2048, 00:13:17.909 "data_size": 63488 00:13:17.909 }, 00:13:17.909 { 00:13:17.909 "name": "BaseBdev2", 00:13:17.909 "uuid": "bb1782b6-030a-5b06-9047-c362efa116df", 00:13:17.909 "is_configured": true, 00:13:17.909 "data_offset": 2048, 00:13:17.909 "data_size": 63488 00:13:17.909 }, 00:13:17.909 { 00:13:17.909 "name": "BaseBdev3", 00:13:17.909 "uuid": "875d6558-1fe6-55da-89e4-f191bc0f8267", 00:13:17.909 "is_configured": true, 00:13:17.909 "data_offset": 2048, 00:13:17.909 "data_size": 63488 00:13:17.909 }, 00:13:17.909 { 00:13:17.909 "name": "BaseBdev4", 00:13:17.909 "uuid": "337dbeac-9e5e-53e1-9434-bbba2fe72b05", 00:13:17.909 "is_configured": true, 00:13:17.909 "data_offset": 2048, 00:13:17.909 "data_size": 63488 00:13:17.909 } 00:13:17.909 ] 00:13:17.909 }' 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.909 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.476 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:18.476 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.476 03:24:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.476 [2024-11-05 03:24:31.862760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.476 [2024-11-05 03:24:31.862986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.476 { 00:13:18.476 "results": [ 00:13:18.476 { 00:13:18.476 "job": "raid_bdev1", 00:13:18.476 "core_mask": "0x1", 00:13:18.476 "workload": "randrw", 00:13:18.476 "percentage": 50, 00:13:18.476 "status": "finished", 00:13:18.476 "queue_depth": 1, 00:13:18.476 "io_size": 131072, 00:13:18.476 "runtime": 1.420206, 00:13:18.476 "iops": 8195.994102264038, 00:13:18.476 "mibps": 1024.4992627830047, 00:13:18.476 "io_failed": 0, 00:13:18.477 "io_timeout": 0, 00:13:18.477 "avg_latency_us": 118.063575132771, 00:13:18.477 "min_latency_us": 37.70181818181818, 00:13:18.477 "max_latency_us": 1891.6072727272726 00:13:18.477 } 00:13:18.477 ], 00:13:18.477 "core_count": 1 00:13:18.477 } 00:13:18.477 [2024-11-05 03:24:31.866486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.477 [2024-11-05 03:24:31.866593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.477 [2024-11-05 03:24:31.866859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.477 [2024-11-05 03:24:31.866882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74927 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 74927 ']' 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 74927 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74927 00:13:18.477 killing process with pid 74927 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74927' 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 74927 00:13:18.477 [2024-11-05 03:24:31.906833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.477 03:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 74927 00:13:18.735 [2024-11-05 03:24:32.155584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WMzcCsHl1C 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:19.677 ************************************ 00:13:19.677 END TEST raid_read_error_test 00:13:19.677 ************************************ 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:19.677 00:13:19.677 real 0m4.708s 00:13:19.677 user 0m5.855s 00:13:19.677 sys 0m0.575s 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.677 03:24:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.677 03:24:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:19.677 03:24:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:19.677 03:24:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.677 03:24:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.677 ************************************ 00:13:19.677 START TEST raid_write_error_test 00:13:19.677 ************************************ 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:19.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0NrlltlEt2 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75073 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75073 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75073 ']' 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.677 03:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.936 [2024-11-05 03:24:33.362266] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:13:19.936 [2024-11-05 03:24:33.362469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75073 ] 00:13:19.936 [2024-11-05 03:24:33.542946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.195 [2024-11-05 03:24:33.660246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.454 [2024-11-05 03:24:33.859639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.454 [2024-11-05 03:24:33.859702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.713 BaseBdev1_malloc 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.713 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.972 true 00:13:20.972 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:20.972 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:20.972 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.972 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-11-05 03:24:34.355275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:20.973 [2024-11-05 03:24:34.355409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.973 [2024-11-05 03:24:34.355440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:20.973 [2024-11-05 03:24:34.355455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.973 [2024-11-05 03:24:34.358045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.973 [2024-11-05 03:24:34.358110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.973 BaseBdev1 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 BaseBdev2_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:20.973 03:24:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 true 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-11-05 03:24:34.416131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:20.973 [2024-11-05 03:24:34.416214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.973 [2024-11-05 03:24:34.416237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:20.973 [2024-11-05 03:24:34.416253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.973 [2024-11-05 03:24:34.419032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.973 [2024-11-05 03:24:34.419105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.973 BaseBdev2 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:20.973 BaseBdev3_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 true 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-11-05 03:24:34.487401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:20.973 [2024-11-05 03:24:34.487514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.973 [2024-11-05 03:24:34.487558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:20.973 [2024-11-05 03:24:34.487575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.973 [2024-11-05 03:24:34.490493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.973 [2024-11-05 03:24:34.490554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:20.973 BaseBdev3 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 BaseBdev4_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 true 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-11-05 03:24:34.543961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:20.973 [2024-11-05 03:24:34.544035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.973 [2024-11-05 03:24:34.544073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:20.973 [2024-11-05 03:24:34.544088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.973 [2024-11-05 03:24:34.546777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.973 [2024-11-05 03:24:34.546841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:20.973 BaseBdev4 
00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-11-05 03:24:34.556029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.973 [2024-11-05 03:24:34.558459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.973 [2024-11-05 03:24:34.558554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.973 [2024-11-05 03:24:34.558664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:20.973 [2024-11-05 03:24:34.558956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:20.973 [2024-11-05 03:24:34.558976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.973 [2024-11-05 03:24:34.559280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:20.973 [2024-11-05 03:24:34.559500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:20.973 [2024-11-05 03:24:34.559516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:20.973 [2024-11-05 03:24:34.559693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.973 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.974 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.974 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.974 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.974 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.232 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.232 "name": "raid_bdev1", 00:13:21.232 "uuid": "7e0b895c-0361-4af8-bbdd-755a01438c3e", 00:13:21.232 "strip_size_kb": 0, 00:13:21.232 "state": "online", 00:13:21.232 "raid_level": "raid1", 00:13:21.232 "superblock": true, 00:13:21.232 "num_base_bdevs": 4, 00:13:21.232 "num_base_bdevs_discovered": 4, 00:13:21.232 
"num_base_bdevs_operational": 4, 00:13:21.232 "base_bdevs_list": [ 00:13:21.232 { 00:13:21.232 "name": "BaseBdev1", 00:13:21.232 "uuid": "74adca93-39ba-5bc1-9c57-e79eb6f71830", 00:13:21.232 "is_configured": true, 00:13:21.232 "data_offset": 2048, 00:13:21.232 "data_size": 63488 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "name": "BaseBdev2", 00:13:21.232 "uuid": "c64718fa-b6a0-506f-b6a4-ee7baa8de84d", 00:13:21.232 "is_configured": true, 00:13:21.232 "data_offset": 2048, 00:13:21.232 "data_size": 63488 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "name": "BaseBdev3", 00:13:21.232 "uuid": "49a929fc-76ac-5d25-b5ad-a04d87d84ba9", 00:13:21.232 "is_configured": true, 00:13:21.232 "data_offset": 2048, 00:13:21.232 "data_size": 63488 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "name": "BaseBdev4", 00:13:21.232 "uuid": "3437bbea-de52-5b36-a5c6-549daf02c46f", 00:13:21.232 "is_configured": true, 00:13:21.232 "data_offset": 2048, 00:13:21.232 "data_size": 63488 00:13:21.232 } 00:13:21.232 ] 00:13:21.232 }' 00:13:21.232 03:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.232 03:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.509 03:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:21.509 03:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:21.780 [2024-11-05 03:24:35.205506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.718 [2024-11-05 03:24:36.082198] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:22.718 [2024-11-05 03:24:36.082275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.718 [2024-11-05 03:24:36.082611] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.718 "name": "raid_bdev1", 00:13:22.718 "uuid": "7e0b895c-0361-4af8-bbdd-755a01438c3e", 00:13:22.718 "strip_size_kb": 0, 00:13:22.718 "state": "online", 00:13:22.718 "raid_level": "raid1", 00:13:22.718 "superblock": true, 00:13:22.718 "num_base_bdevs": 4, 00:13:22.718 "num_base_bdevs_discovered": 3, 00:13:22.718 "num_base_bdevs_operational": 3, 00:13:22.718 "base_bdevs_list": [ 00:13:22.718 { 00:13:22.718 "name": null, 00:13:22.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.718 "is_configured": false, 00:13:22.718 "data_offset": 0, 00:13:22.718 "data_size": 63488 00:13:22.718 }, 00:13:22.718 { 00:13:22.718 "name": "BaseBdev2", 00:13:22.718 "uuid": "c64718fa-b6a0-506f-b6a4-ee7baa8de84d", 00:13:22.718 "is_configured": true, 00:13:22.718 "data_offset": 2048, 00:13:22.718 "data_size": 63488 00:13:22.718 }, 00:13:22.718 { 00:13:22.718 "name": "BaseBdev3", 00:13:22.718 "uuid": "49a929fc-76ac-5d25-b5ad-a04d87d84ba9", 00:13:22.718 "is_configured": true, 00:13:22.718 "data_offset": 2048, 00:13:22.718 "data_size": 63488 00:13:22.718 }, 00:13:22.718 { 00:13:22.718 "name": "BaseBdev4", 00:13:22.718 "uuid": "3437bbea-de52-5b36-a5c6-549daf02c46f", 00:13:22.718 "is_configured": true, 00:13:22.718 "data_offset": 2048, 00:13:22.718 "data_size": 63488 00:13:22.718 } 00:13:22.718 ] 
00:13:22.718 }' 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.718 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.285 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:23.285 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.285 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.285 [2024-11-05 03:24:36.627461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.285 [2024-11-05 03:24:36.627494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.285 [2024-11-05 03:24:36.630963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.285 [2024-11-05 03:24:36.631018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.285 [2024-11-05 03:24:36.631192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.285 [2024-11-05 03:24:36.631218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:23.285 { 00:13:23.285 "results": [ 00:13:23.285 { 00:13:23.285 "job": "raid_bdev1", 00:13:23.285 "core_mask": "0x1", 00:13:23.285 "workload": "randrw", 00:13:23.285 "percentage": 50, 00:13:23.285 "status": "finished", 00:13:23.285 "queue_depth": 1, 00:13:23.285 "io_size": 131072, 00:13:23.285 "runtime": 1.419238, 00:13:23.285 "iops": 8554.590561977624, 00:13:23.285 "mibps": 1069.323820247203, 00:13:23.285 "io_failed": 0, 00:13:23.285 "io_timeout": 0, 00:13:23.285 "avg_latency_us": 112.80567513534155, 00:13:23.285 "min_latency_us": 37.00363636363636, 00:13:23.286 "max_latency_us": 1906.5018181818182 00:13:23.286 } 00:13:23.286 ], 00:13:23.286 "core_count": 1 
00:13:23.286 } 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75073 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75073 ']' 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75073 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75073 00:13:23.286 killing process with pid 75073 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75073' 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75073 00:13:23.286 [2024-11-05 03:24:36.666978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.286 03:24:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75073 00:13:23.545 [2024-11-05 03:24:36.934708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0NrlltlEt2 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:24.482 00:13:24.482 real 0m4.756s 00:13:24.482 user 0m5.847s 00:13:24.482 sys 0m0.626s 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:24.482 ************************************ 00:13:24.482 END TEST raid_write_error_test 00:13:24.482 ************************************ 00:13:24.482 03:24:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.482 03:24:38 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:24.482 03:24:38 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:24.482 03:24:38 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:24.482 03:24:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:24.482 03:24:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.482 03:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.482 ************************************ 00:13:24.482 START TEST raid_rebuild_test 00:13:24.482 ************************************ 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:24.482 
03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:24.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75211 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75211 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75211 ']' 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.482 03:24:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.741 [2024-11-05 03:24:38.151654] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:13:24.741 [2024-11-05 03:24:38.152118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75211 ] 00:13:24.741 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:24.741 Zero copy mechanism will not be used. 00:13:24.741 [2024-11-05 03:24:38.325813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.000 [2024-11-05 03:24:38.446952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.258 [2024-11-05 03:24:38.638690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.258 [2024-11-05 03:24:38.638988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.584 BaseBdev1_malloc 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.584 03:24:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.584 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 [2024-11-05 03:24:39.203249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:25.844 [2024-11-05 03:24:39.203371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.844 [2024-11-05 03:24:39.203403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:25.844 [2024-11-05 03:24:39.203421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.844 [2024-11-05 03:24:39.206271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.844 [2024-11-05 03:24:39.206361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:25.844 BaseBdev1 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 BaseBdev2_malloc 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 [2024-11-05 03:24:39.249781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:25.844 [2024-11-05 03:24:39.250060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.844 [2024-11-05 03:24:39.250096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:25.844 [2024-11-05 03:24:39.250116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.844 [2024-11-05 03:24:39.252970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.844 [2024-11-05 03:24:39.253020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.844 BaseBdev2 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 spare_malloc 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 spare_delay 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 [2024-11-05 03:24:39.321554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:25.844 [2024-11-05 03:24:39.321657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.844 [2024-11-05 03:24:39.321686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:25.844 [2024-11-05 03:24:39.321703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.844 [2024-11-05 03:24:39.324711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.844 [2024-11-05 03:24:39.324789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:25.844 spare 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 [2024-11-05 03:24:39.329670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.844 [2024-11-05 03:24:39.332394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.844 [2024-11-05 03:24:39.332545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:25.844 [2024-11-05 03:24:39.332569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:25.844 [2024-11-05 03:24:39.332901] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:25.844 [2024-11-05 03:24:39.333101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:25.844 [2024-11-05 03:24:39.333120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:25.844 [2024-11-05 03:24:39.333300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.844 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.845 "name": "raid_bdev1", 00:13:25.845 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:25.845 "strip_size_kb": 0, 00:13:25.845 "state": "online", 00:13:25.845 "raid_level": "raid1", 00:13:25.845 "superblock": false, 00:13:25.845 "num_base_bdevs": 2, 00:13:25.845 "num_base_bdevs_discovered": 2, 00:13:25.845 "num_base_bdevs_operational": 2, 00:13:25.845 "base_bdevs_list": [ 00:13:25.845 { 00:13:25.845 "name": "BaseBdev1", 00:13:25.845 "uuid": "49fe59b5-ef49-555f-86cc-0eb3bd295c96", 00:13:25.845 "is_configured": true, 00:13:25.845 "data_offset": 0, 00:13:25.845 "data_size": 65536 00:13:25.845 }, 00:13:25.845 { 00:13:25.845 "name": "BaseBdev2", 00:13:25.845 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:25.845 "is_configured": true, 00:13:25.845 "data_offset": 0, 00:13:25.845 "data_size": 65536 00:13:25.845 } 00:13:25.845 ] 00:13:25.845 }' 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.845 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.414 [2024-11-05 03:24:39.846525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.414 03:24:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.414 03:24:39 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:26.673 [2024-11-05 03:24:40.226282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:26.673 /dev/nbd0 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.673 1+0 records in 00:13:26.673 1+0 records out 00:13:26.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279444 s, 14.7 MB/s 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:26.673 03:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:33.244 65536+0 records in 00:13:33.244 65536+0 records out 00:13:33.244 33554432 bytes (34 MB, 32 MiB) copied, 6.15624 s, 5.5 MB/s 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.244 [2024-11-05 03:24:46.727168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.244 [2024-11-05 03:24:46.760547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.244 "name": "raid_bdev1", 00:13:33.244 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:33.244 "strip_size_kb": 0, 00:13:33.244 "state": "online", 00:13:33.244 "raid_level": "raid1", 00:13:33.244 "superblock": false, 00:13:33.244 "num_base_bdevs": 2, 00:13:33.244 "num_base_bdevs_discovered": 1, 00:13:33.244 "num_base_bdevs_operational": 1, 00:13:33.244 "base_bdevs_list": [ 00:13:33.244 { 00:13:33.244 "name": null, 00:13:33.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.244 "is_configured": false, 00:13:33.244 "data_offset": 0, 00:13:33.244 "data_size": 65536 00:13:33.244 }, 00:13:33.244 { 00:13:33.244 "name": "BaseBdev2", 00:13:33.244 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:33.244 "is_configured": true, 00:13:33.244 "data_offset": 0, 00:13:33.244 "data_size": 65536 00:13:33.244 } 00:13:33.244 ] 00:13:33.244 }' 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.244 03:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 03:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.814 03:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.814 03:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.814 [2024-11-05 03:24:47.272742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.814 [2024-11-05 03:24:47.289837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:33.814 03:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.814 03:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:33.814 [2024-11-05 03:24:47.292220] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:34.752 "name": "raid_bdev1", 00:13:34.752 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:34.752 "strip_size_kb": 0, 00:13:34.752 "state": "online", 00:13:34.752 "raid_level": "raid1", 00:13:34.752 "superblock": false, 00:13:34.752 "num_base_bdevs": 2, 00:13:34.752 "num_base_bdevs_discovered": 2, 00:13:34.752 "num_base_bdevs_operational": 2, 00:13:34.752 "process": { 00:13:34.752 "type": "rebuild", 00:13:34.752 "target": "spare", 00:13:34.752 "progress": { 00:13:34.752 "blocks": 20480, 00:13:34.752 "percent": 31 00:13:34.752 } 00:13:34.752 }, 00:13:34.752 "base_bdevs_list": [ 00:13:34.752 { 00:13:34.752 "name": "spare", 00:13:34.752 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:34.752 "is_configured": true, 00:13:34.752 "data_offset": 0, 00:13:34.752 "data_size": 65536 00:13:34.752 }, 00:13:34.752 { 00:13:34.752 "name": "BaseBdev2", 00:13:34.752 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:34.752 "is_configured": true, 00:13:34.752 "data_offset": 0, 00:13:34.752 "data_size": 65536 00:13:34.752 } 00:13:34.752 ] 00:13:34.752 }' 00:13:34.752 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.012 [2024-11-05 03:24:48.469703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.012 [2024-11-05 03:24:48.500759] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.012 [2024-11-05 03:24:48.500894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.012 [2024-11-05 03:24:48.500919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.012 [2024-11-05 03:24:48.500935] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.012 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
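The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` calls traced above fetch `bdev_raid_get_bdevs all`, pick out `raid_bdev1` with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compare the state fields against the expected values. A minimal Python sketch of that comparison step, fed with an abbreviated copy of the `raid_bdev_info` JSON captured in this log (`verify_raid_bdev_state` here is a stand-in for the shell helper in `bdev_raid.sh`, not an SPDK API):

```python
import json

# Abbreviated copy of the raid_bdev_info JSON captured in the trace above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the field comparisons the shell helper performs on the
    # filtered RPC output.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# verify_raid_bdev_state raid_bdev1 online raid1 0 1, as in the log:
# one base bdev was removed mid-rebuild, so only 1 of 2 remains operational.
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 1)
```

The real helper also walks `base_bdevs_list` to count configured entries; that part is omitted from this sketch.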
00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.013 "name": "raid_bdev1", 00:13:35.013 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:35.013 "strip_size_kb": 0, 00:13:35.013 "state": "online", 00:13:35.013 "raid_level": "raid1", 00:13:35.013 "superblock": false, 00:13:35.013 "num_base_bdevs": 2, 00:13:35.013 "num_base_bdevs_discovered": 1, 00:13:35.013 "num_base_bdevs_operational": 1, 00:13:35.013 "base_bdevs_list": [ 00:13:35.013 { 00:13:35.013 "name": null, 00:13:35.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.013 "is_configured": false, 00:13:35.013 "data_offset": 0, 00:13:35.013 "data_size": 65536 00:13:35.013 }, 00:13:35.013 { 00:13:35.013 "name": "BaseBdev2", 00:13:35.013 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:35.013 "is_configured": true, 00:13:35.013 "data_offset": 0, 00:13:35.013 "data_size": 65536 00:13:35.013 } 00:13:35.013 ] 00:13:35.013 }' 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.013 03:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
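The `verify_raid_bdev_process raid_bdev1 none none` call above relies on jq's alternative operator: while a rebuild is running, `bdev_raid_get_bdevs` reports a `process` object, and once it finishes the object disappears, so `.process.type // "none"` and `.process.target // "none"` both collapse to `"none"`. A small Python sketch of that fallback, using values taken from this trace (`process_field` is a hypothetical helper, not part of SPDK):

```python
def process_field(info, field):
    # Hypothetical helper mirroring jq's '.process.<field> // "none"'
    # for the missing-key case exercised in this test.
    return info.get("process", {}).get(field, "none")

# During the rebuild the RPC output carries a process object ...
during = {"process": {"type": "rebuild", "target": "spare",
                      "progress": {"blocks": 20480, "percent": 31}}}
# ... and after the rebuild finishes the object disappears entirely.
after = {"name": "raid_bdev1", "state": "online"}

assert process_field(during, "type") == "rebuild"
assert process_field(during, "target") == "spare"
assert process_field(after, "type") == "none"
assert process_field(after, "target") == "none"
```

One subtlety: jq's `//` also fires when the value is `null` or `false`, not only when the key is absent; the `dict.get` sketch covers only the missing-key case, which is the one this test hits.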
00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.582 "name": "raid_bdev1", 00:13:35.582 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:35.582 "strip_size_kb": 0, 00:13:35.582 "state": "online", 00:13:35.582 "raid_level": "raid1", 00:13:35.582 "superblock": false, 00:13:35.582 "num_base_bdevs": 2, 00:13:35.582 "num_base_bdevs_discovered": 1, 00:13:35.582 "num_base_bdevs_operational": 1, 00:13:35.582 "base_bdevs_list": [ 00:13:35.582 { 00:13:35.582 "name": null, 00:13:35.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.582 "is_configured": false, 00:13:35.582 "data_offset": 0, 00:13:35.582 "data_size": 65536 00:13:35.582 }, 00:13:35.582 { 00:13:35.582 "name": "BaseBdev2", 00:13:35.582 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:35.582 "is_configured": true, 00:13:35.582 "data_offset": 0, 00:13:35.582 "data_size": 65536 00:13:35.582 } 00:13:35.582 ] 00:13:35.582 }' 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.582 03:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.582 [2024-11-05 03:24:49.214952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.842 [2024-11-05 03:24:49.230825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:35.842 03:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.842 03:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.842 [2024-11-05 03:24:49.233792] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.809 "name": "raid_bdev1", 00:13:36.809 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 
00:13:36.809 "strip_size_kb": 0, 00:13:36.809 "state": "online", 00:13:36.809 "raid_level": "raid1", 00:13:36.809 "superblock": false, 00:13:36.809 "num_base_bdevs": 2, 00:13:36.809 "num_base_bdevs_discovered": 2, 00:13:36.809 "num_base_bdevs_operational": 2, 00:13:36.809 "process": { 00:13:36.809 "type": "rebuild", 00:13:36.809 "target": "spare", 00:13:36.809 "progress": { 00:13:36.809 "blocks": 20480, 00:13:36.809 "percent": 31 00:13:36.809 } 00:13:36.809 }, 00:13:36.809 "base_bdevs_list": [ 00:13:36.809 { 00:13:36.809 "name": "spare", 00:13:36.809 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:36.809 "is_configured": true, 00:13:36.809 "data_offset": 0, 00:13:36.809 "data_size": 65536 00:13:36.809 }, 00:13:36.809 { 00:13:36.809 "name": "BaseBdev2", 00:13:36.809 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:36.809 "is_configured": true, 00:13:36.809 "data_offset": 0, 00:13:36.809 "data_size": 65536 00:13:36.809 } 00:13:36.809 ] 00:13:36.809 }' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.809 03:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.069 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.069 "name": "raid_bdev1", 00:13:37.069 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:37.069 "strip_size_kb": 0, 00:13:37.069 "state": "online", 00:13:37.069 "raid_level": "raid1", 00:13:37.069 "superblock": false, 00:13:37.069 "num_base_bdevs": 2, 00:13:37.069 "num_base_bdevs_discovered": 2, 00:13:37.069 "num_base_bdevs_operational": 2, 00:13:37.069 "process": { 00:13:37.069 "type": "rebuild", 00:13:37.069 "target": "spare", 00:13:37.069 "progress": { 00:13:37.069 "blocks": 22528, 00:13:37.069 "percent": 34 00:13:37.069 } 00:13:37.069 }, 00:13:37.069 "base_bdevs_list": [ 00:13:37.069 { 00:13:37.069 "name": "spare", 00:13:37.069 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:37.069 "is_configured": true, 00:13:37.069 "data_offset": 0, 
00:13:37.069 "data_size": 65536 00:13:37.069 }, 00:13:37.069 { 00:13:37.069 "name": "BaseBdev2", 00:13:37.069 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:37.069 "is_configured": true, 00:13:37.069 "data_offset": 0, 00:13:37.069 "data_size": 65536 00:13:37.069 } 00:13:37.069 ] 00:13:37.069 }' 00:13:37.069 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.069 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.069 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.069 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.069 03:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.008 "name": "raid_bdev1", 00:13:38.008 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:38.008 "strip_size_kb": 0, 00:13:38.008 "state": "online", 00:13:38.008 "raid_level": "raid1", 00:13:38.008 "superblock": false, 00:13:38.008 "num_base_bdevs": 2, 00:13:38.008 "num_base_bdevs_discovered": 2, 00:13:38.008 "num_base_bdevs_operational": 2, 00:13:38.008 "process": { 00:13:38.008 "type": "rebuild", 00:13:38.008 "target": "spare", 00:13:38.008 "progress": { 00:13:38.008 "blocks": 47104, 00:13:38.008 "percent": 71 00:13:38.008 } 00:13:38.008 }, 00:13:38.008 "base_bdevs_list": [ 00:13:38.008 { 00:13:38.008 "name": "spare", 00:13:38.008 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:38.008 "is_configured": true, 00:13:38.008 "data_offset": 0, 00:13:38.008 "data_size": 65536 00:13:38.008 }, 00:13:38.008 { 00:13:38.008 "name": "BaseBdev2", 00:13:38.008 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:38.008 "is_configured": true, 00:13:38.008 "data_offset": 0, 00:13:38.008 "data_size": 65536 00:13:38.008 } 00:13:38.008 ] 00:13:38.008 }' 00:13:38.008 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.268 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.268 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.268 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.268 03:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.836 [2024-11-05 03:24:52.457189] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:38.836 [2024-11-05 03:24:52.457312] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on 
raid bdev raid_bdev1 00:13:38.836 [2024-11-05 03:24:52.457426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.096 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.096 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.096 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.096 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.096 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.096 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.356 "name": "raid_bdev1", 00:13:39.356 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:39.356 "strip_size_kb": 0, 00:13:39.356 "state": "online", 00:13:39.356 "raid_level": "raid1", 00:13:39.356 "superblock": false, 00:13:39.356 "num_base_bdevs": 2, 00:13:39.356 "num_base_bdevs_discovered": 2, 00:13:39.356 "num_base_bdevs_operational": 2, 00:13:39.356 "base_bdevs_list": [ 00:13:39.356 { 00:13:39.356 "name": "spare", 00:13:39.356 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:39.356 "is_configured": true, 00:13:39.356 "data_offset": 0, 00:13:39.356 
"data_size": 65536 00:13:39.356 }, 00:13:39.356 { 00:13:39.356 "name": "BaseBdev2", 00:13:39.356 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:39.356 "is_configured": true, 00:13:39.356 "data_offset": 0, 00:13:39.356 "data_size": 65536 00:13:39.356 } 00:13:39.356 ] 00:13:39.356 }' 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:39.356 "name": "raid_bdev1", 00:13:39.356 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:39.356 "strip_size_kb": 0, 00:13:39.356 "state": "online", 00:13:39.356 "raid_level": "raid1", 00:13:39.356 "superblock": false, 00:13:39.356 "num_base_bdevs": 2, 00:13:39.356 "num_base_bdevs_discovered": 2, 00:13:39.356 "num_base_bdevs_operational": 2, 00:13:39.356 "base_bdevs_list": [ 00:13:39.356 { 00:13:39.356 "name": "spare", 00:13:39.356 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:39.356 "is_configured": true, 00:13:39.356 "data_offset": 0, 00:13:39.356 "data_size": 65536 00:13:39.356 }, 00:13:39.356 { 00:13:39.356 "name": "BaseBdev2", 00:13:39.356 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:39.356 "is_configured": true, 00:13:39.356 "data_offset": 0, 00:13:39.356 "data_size": 65536 00:13:39.356 } 00:13:39.356 ] 00:13:39.356 }' 00:13:39.356 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.616 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.616 03:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.616 03:24:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.616 "name": "raid_bdev1", 00:13:39.616 "uuid": "58da9cf0-208d-43f0-ab25-32c0eb0cce49", 00:13:39.616 "strip_size_kb": 0, 00:13:39.616 "state": "online", 00:13:39.616 "raid_level": "raid1", 00:13:39.616 "superblock": false, 00:13:39.616 "num_base_bdevs": 2, 00:13:39.616 "num_base_bdevs_discovered": 2, 00:13:39.616 "num_base_bdevs_operational": 2, 00:13:39.616 "base_bdevs_list": [ 00:13:39.616 { 00:13:39.616 "name": "spare", 00:13:39.616 "uuid": "5ee95427-38f4-5660-b9ca-542b59a1f5f9", 00:13:39.616 "is_configured": true, 00:13:39.616 "data_offset": 0, 00:13:39.616 "data_size": 65536 00:13:39.616 }, 00:13:39.616 { 00:13:39.616 "name": "BaseBdev2", 00:13:39.616 "uuid": "452c750e-0a2e-5622-b8e7-7b7310b8b2a8", 00:13:39.616 "is_configured": true, 00:13:39.616 "data_offset": 0, 00:13:39.616 "data_size": 65536 00:13:39.616 } 00:13:39.616 ] 00:13:39.616 }' 00:13:39.616 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.616 03:24:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.185 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.185 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.185 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.185 [2024-11-05 03:24:53.585145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.185 [2024-11-05 03:24:53.585374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.185 [2024-11-05 03:24:53.585493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.185 [2024-11-05 03:24:53.585585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.185 [2024-11-05 03:24:53.585615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:40.185 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.185 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:40.186 03:24:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.186 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:40.445 /dev/nbd0 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:40.445 03:24:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.445 1+0 records in 00:13:40.445 1+0 records out 00:13:40.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270368 s, 15.1 MB/s 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.445 03:24:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:40.705 /dev/nbd1 00:13:40.705 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:40.706 03:24:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.706 1+0 records in 00:13:40.706 1+0 records out 00:13:40.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037024 s, 11.1 MB/s 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.706 03:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.965 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.226 03:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.795 03:24:55 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75211 00:13:41.795 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75211 ']' 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75211 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75211 00:13:41.796 killing process with pid 75211 00:13:41.796 Received shutdown signal, test time was about 60.000000 seconds 00:13:41.796 00:13:41.796 Latency(us) 00:13:41.796 [2024-11-05T03:24:55.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.796 [2024-11-05T03:24:55.435Z] =================================================================================================================== 00:13:41.796 [2024-11-05T03:24:55.435Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75211' 00:13:41.796 03:24:55 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75211 00:13:41.796 [2024-11-05 03:24:55.190515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.796 03:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75211 00:13:42.055 [2024-11-05 03:24:55.448001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:42.995 00:13:42.995 real 0m18.381s 00:13:42.995 user 0m21.011s 00:13:42.995 sys 0m3.415s 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.995 ************************************ 00:13:42.995 END TEST raid_rebuild_test 00:13:42.995 ************************************ 00:13:42.995 03:24:56 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:42.995 03:24:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:42.995 03:24:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:42.995 03:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.995 ************************************ 00:13:42.995 START TEST raid_rebuild_test_sb 00:13:42.995 ************************************ 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 
00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:42.995 03:24:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75662 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75662 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75662 ']' 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:42.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:42.995 03:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.995 [2024-11-05 03:24:56.614915] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:13:42.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:42.995 Zero copy mechanism will not be used. 
00:13:42.995 [2024-11-05 03:24:56.615124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:13:43.255 [2024-11-05 03:24:56.800358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.514 [2024-11-05 03:24:56.921016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.514 [2024-11-05 03:24:57.117126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.514 [2024-11-05 03:24:57.117196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.084 BaseBdev1_malloc 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.084 [2024-11-05 03:24:57.627841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:44.084 [2024-11-05 03:24:57.627954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.084 [2024-11-05 03:24:57.627984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:44.084 [2024-11-05 03:24:57.628001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.084 [2024-11-05 03:24:57.630846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.084 [2024-11-05 03:24:57.630926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.084 BaseBdev1 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.084 BaseBdev2_malloc 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.084 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.084 [2024-11-05 03:24:57.672771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:44.084 [2024-11-05 03:24:57.672845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.084 [2024-11-05 03:24:57.672882] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:44.084 [2024-11-05 03:24:57.672902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.084 [2024-11-05 03:24:57.675713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.085 [2024-11-05 03:24:57.675759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.085 BaseBdev2 00:13:44.085 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.085 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:44.085 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.085 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 spare_malloc 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 spare_delay 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 [2024-11-05 03:24:57.753418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:44.344 [2024-11-05 03:24:57.753527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.344 [2024-11-05 03:24:57.753577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:44.344 [2024-11-05 03:24:57.753632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.344 [2024-11-05 03:24:57.757584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.344 [2024-11-05 03:24:57.757678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.344 spare 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 [2024-11-05 03:24:57.762077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.344 [2024-11-05 03:24:57.765415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.344 [2024-11-05 03:24:57.765803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:44.344 [2024-11-05 03:24:57.765858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.344 [2024-11-05 03:24:57.766358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:44.344 [2024-11-05 03:24:57.766837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:44.344 [2024-11-05 03:24:57.766875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:13:44.344 [2024-11-05 03:24:57.767257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:44.344 "name": "raid_bdev1", 00:13:44.344 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:44.344 "strip_size_kb": 0, 00:13:44.344 "state": "online", 00:13:44.344 "raid_level": "raid1", 00:13:44.344 "superblock": true, 00:13:44.344 "num_base_bdevs": 2, 00:13:44.344 "num_base_bdevs_discovered": 2, 00:13:44.344 "num_base_bdevs_operational": 2, 00:13:44.344 "base_bdevs_list": [ 00:13:44.344 { 00:13:44.344 "name": "BaseBdev1", 00:13:44.344 "uuid": "e0edd546-a298-5259-9f87-da5be4addbec", 00:13:44.344 "is_configured": true, 00:13:44.344 "data_offset": 2048, 00:13:44.344 "data_size": 63488 00:13:44.344 }, 00:13:44.344 { 00:13:44.344 "name": "BaseBdev2", 00:13:44.344 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:44.344 "is_configured": true, 00:13:44.344 "data_offset": 2048, 00:13:44.344 "data_size": 63488 00:13:44.344 } 00:13:44.344 ] 00:13:44.344 }' 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.344 03:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 [2024-11-05 03:24:58.310618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.913 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.914 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:44.914 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.914 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.914 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:45.173 [2024-11-05 03:24:58.686386] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:45.173 /dev/nbd0 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.173 1+0 records in 00:13:45.173 1+0 records out 00:13:45.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329906 s, 12.4 MB/s 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:45.173 03:24:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:45.173 03:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:51.749 63488+0 records in 00:13:51.749 63488+0 records out 00:13:51.749 32505856 bytes (33 MB, 31 MiB) copied, 6.39725 s, 5.1 MB/s 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.749 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:52.009 [2024-11-05 03:25:05.477196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 [2024-11-05 03:25:05.490433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.009 "name": "raid_bdev1", 00:13:52.009 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:52.009 "strip_size_kb": 0, 00:13:52.009 "state": "online", 00:13:52.009 "raid_level": "raid1", 00:13:52.009 "superblock": true, 00:13:52.009 "num_base_bdevs": 2, 00:13:52.009 "num_base_bdevs_discovered": 1, 00:13:52.009 "num_base_bdevs_operational": 1, 00:13:52.009 "base_bdevs_list": [ 00:13:52.009 { 00:13:52.009 "name": null, 00:13:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.009 "is_configured": false, 00:13:52.009 "data_offset": 0, 00:13:52.009 "data_size": 63488 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "name": "BaseBdev2", 00:13:52.009 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:52.009 "is_configured": true, 00:13:52.009 "data_offset": 2048, 00:13:52.009 "data_size": 63488 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }' 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.009 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 03:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:13:52.578 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.578 03:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 [2024-11-05 03:25:05.998712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.578 [2024-11-05 03:25:06.014512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:52.578 03:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.578 03:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:52.579 [2024-11-05 03:25:06.017336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:53.518 "name": "raid_bdev1", 00:13:53.518 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:53.518 "strip_size_kb": 0, 00:13:53.518 "state": "online", 00:13:53.518 "raid_level": "raid1", 00:13:53.518 "superblock": true, 00:13:53.518 "num_base_bdevs": 2, 00:13:53.518 "num_base_bdevs_discovered": 2, 00:13:53.518 "num_base_bdevs_operational": 2, 00:13:53.518 "process": { 00:13:53.518 "type": "rebuild", 00:13:53.518 "target": "spare", 00:13:53.518 "progress": { 00:13:53.518 "blocks": 20480, 00:13:53.518 "percent": 32 00:13:53.518 } 00:13:53.518 }, 00:13:53.518 "base_bdevs_list": [ 00:13:53.518 { 00:13:53.518 "name": "spare", 00:13:53.518 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:53.518 "is_configured": true, 00:13:53.518 "data_offset": 2048, 00:13:53.518 "data_size": 63488 00:13:53.518 }, 00:13:53.518 { 00:13:53.518 "name": "BaseBdev2", 00:13:53.518 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:53.518 "is_configured": true, 00:13:53.518 "data_offset": 2048, 00:13:53.518 "data_size": 63488 00:13:53.518 } 00:13:53.518 ] 00:13:53.518 }' 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.518 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.778 [2024-11-05 03:25:07.187031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.778 [2024-11-05 
03:25:07.226416] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:53.778 [2024-11-05 03:25:07.226546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.778 [2024-11-05 03:25:07.226570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.778 [2024-11-05 03:25:07.226585] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.778 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.779 "name": "raid_bdev1", 00:13:53.779 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:53.779 "strip_size_kb": 0, 00:13:53.779 "state": "online", 00:13:53.779 "raid_level": "raid1", 00:13:53.779 "superblock": true, 00:13:53.779 "num_base_bdevs": 2, 00:13:53.779 "num_base_bdevs_discovered": 1, 00:13:53.779 "num_base_bdevs_operational": 1, 00:13:53.779 "base_bdevs_list": [ 00:13:53.779 { 00:13:53.779 "name": null, 00:13:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.779 "is_configured": false, 00:13:53.779 "data_offset": 0, 00:13:53.779 "data_size": 63488 00:13:53.779 }, 00:13:53.779 { 00:13:53.779 "name": "BaseBdev2", 00:13:53.779 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:53.779 "is_configured": true, 00:13:53.779 "data_offset": 2048, 00:13:53.779 "data_size": 63488 00:13:53.779 } 00:13:53.779 ] 00:13:53.779 }' 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.779 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.348 "name": "raid_bdev1", 00:13:54.348 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:54.348 "strip_size_kb": 0, 00:13:54.348 "state": "online", 00:13:54.348 "raid_level": "raid1", 00:13:54.348 "superblock": true, 00:13:54.348 "num_base_bdevs": 2, 00:13:54.348 "num_base_bdevs_discovered": 1, 00:13:54.348 "num_base_bdevs_operational": 1, 00:13:54.348 "base_bdevs_list": [ 00:13:54.348 { 00:13:54.348 "name": null, 00:13:54.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.348 "is_configured": false, 00:13:54.348 "data_offset": 0, 00:13:54.348 "data_size": 63488 00:13:54.348 }, 00:13:54.348 { 00:13:54.348 "name": "BaseBdev2", 00:13:54.348 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:54.348 "is_configured": true, 00:13:54.348 "data_offset": 2048, 00:13:54.348 "data_size": 63488 00:13:54.348 } 00:13:54.348 ] 00:13:54.348 }' 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.348 03:25:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.348 [2024-11-05 03:25:07.946849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.348 [2024-11-05 03:25:07.963755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.348 03:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:54.348 [2024-11-05 03:25:07.966527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.728 03:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:55.728 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.728 "name": "raid_bdev1", 00:13:55.728 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:55.728 "strip_size_kb": 0, 00:13:55.728 "state": "online", 00:13:55.728 "raid_level": "raid1", 00:13:55.728 "superblock": true, 00:13:55.728 "num_base_bdevs": 2, 00:13:55.728 "num_base_bdevs_discovered": 2, 00:13:55.728 "num_base_bdevs_operational": 2, 00:13:55.728 "process": { 00:13:55.728 "type": "rebuild", 00:13:55.728 "target": "spare", 00:13:55.728 "progress": { 00:13:55.728 "blocks": 20480, 00:13:55.728 "percent": 32 00:13:55.728 } 00:13:55.728 }, 00:13:55.728 "base_bdevs_list": [ 00:13:55.728 { 00:13:55.728 "name": "spare", 00:13:55.728 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:55.728 "is_configured": true, 00:13:55.728 "data_offset": 2048, 00:13:55.728 "data_size": 63488 00:13:55.728 }, 00:13:55.728 { 00:13:55.728 "name": "BaseBdev2", 00:13:55.728 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:55.728 "is_configured": true, 00:13:55.728 "data_offset": 2048, 00:13:55.728 "data_size": 63488 00:13:55.728 } 00:13:55.728 ] 00:13:55.728 }' 00:13:55.728 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.728 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.728 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.728 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:55.729 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:55.729 03:25:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.729 "name": "raid_bdev1", 00:13:55.729 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:55.729 "strip_size_kb": 0, 00:13:55.729 "state": "online", 00:13:55.729 "raid_level": "raid1", 00:13:55.729 "superblock": true, 00:13:55.729 "num_base_bdevs": 2, 00:13:55.729 
"num_base_bdevs_discovered": 2, 00:13:55.729 "num_base_bdevs_operational": 2, 00:13:55.729 "process": { 00:13:55.729 "type": "rebuild", 00:13:55.729 "target": "spare", 00:13:55.729 "progress": { 00:13:55.729 "blocks": 22528, 00:13:55.729 "percent": 35 00:13:55.729 } 00:13:55.729 }, 00:13:55.729 "base_bdevs_list": [ 00:13:55.729 { 00:13:55.729 "name": "spare", 00:13:55.729 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:55.729 "is_configured": true, 00:13:55.729 "data_offset": 2048, 00:13:55.729 "data_size": 63488 00:13:55.729 }, 00:13:55.729 { 00:13:55.729 "name": "BaseBdev2", 00:13:55.729 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:55.729 "is_configured": true, 00:13:55.729 "data_offset": 2048, 00:13:55.729 "data_size": 63488 00:13:55.729 } 00:13:55.729 ] 00:13:55.729 }' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.729 03:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.108 "name": "raid_bdev1", 00:13:57.108 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:57.108 "strip_size_kb": 0, 00:13:57.108 "state": "online", 00:13:57.108 "raid_level": "raid1", 00:13:57.108 "superblock": true, 00:13:57.108 "num_base_bdevs": 2, 00:13:57.108 "num_base_bdevs_discovered": 2, 00:13:57.108 "num_base_bdevs_operational": 2, 00:13:57.108 "process": { 00:13:57.108 "type": "rebuild", 00:13:57.108 "target": "spare", 00:13:57.108 "progress": { 00:13:57.108 "blocks": 47104, 00:13:57.108 "percent": 74 00:13:57.108 } 00:13:57.108 }, 00:13:57.108 "base_bdevs_list": [ 00:13:57.108 { 00:13:57.108 "name": "spare", 00:13:57.108 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:57.108 "is_configured": true, 00:13:57.108 "data_offset": 2048, 00:13:57.108 "data_size": 63488 00:13:57.108 }, 00:13:57.108 { 00:13:57.108 "name": "BaseBdev2", 00:13:57.108 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:57.108 "is_configured": true, 00:13:57.108 "data_offset": 2048, 00:13:57.108 "data_size": 63488 00:13:57.108 } 00:13:57.108 ] 00:13:57.108 }' 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.108 03:25:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.108 03:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.677 [2024-11-05 03:25:11.089168] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:57.677 [2024-11-05 03:25:11.089282] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:57.677 [2024-11-05 03:25:11.089463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.947 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.947 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.947 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.947 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.947 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.947 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:57.948 "name": "raid_bdev1", 00:13:57.948 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:57.948 "strip_size_kb": 0, 00:13:57.948 "state": "online", 00:13:57.948 "raid_level": "raid1", 00:13:57.948 "superblock": true, 00:13:57.948 "num_base_bdevs": 2, 00:13:57.948 "num_base_bdevs_discovered": 2, 00:13:57.948 "num_base_bdevs_operational": 2, 00:13:57.948 "base_bdevs_list": [ 00:13:57.948 { 00:13:57.948 "name": "spare", 00:13:57.948 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:57.948 "is_configured": true, 00:13:57.948 "data_offset": 2048, 00:13:57.948 "data_size": 63488 00:13:57.948 }, 00:13:57.948 { 00:13:57.948 "name": "BaseBdev2", 00:13:57.948 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:57.948 "is_configured": true, 00:13:57.948 "data_offset": 2048, 00:13:57.948 "data_size": 63488 00:13:57.948 } 00:13:57.948 ] 00:13:57.948 }' 00:13:57.948 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.208 03:25:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.208 "name": "raid_bdev1", 00:13:58.208 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:58.208 "strip_size_kb": 0, 00:13:58.208 "state": "online", 00:13:58.208 "raid_level": "raid1", 00:13:58.208 "superblock": true, 00:13:58.208 "num_base_bdevs": 2, 00:13:58.208 "num_base_bdevs_discovered": 2, 00:13:58.208 "num_base_bdevs_operational": 2, 00:13:58.208 "base_bdevs_list": [ 00:13:58.208 { 00:13:58.208 "name": "spare", 00:13:58.208 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:58.208 "is_configured": true, 00:13:58.208 "data_offset": 2048, 00:13:58.208 "data_size": 63488 00:13:58.208 }, 00:13:58.208 { 00:13:58.208 "name": "BaseBdev2", 00:13:58.208 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:58.208 "is_configured": true, 00:13:58.208 "data_offset": 2048, 00:13:58.208 "data_size": 63488 00:13:58.208 } 00:13:58.208 ] 00:13:58.208 }' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.208 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.209 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.209 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.469 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.469 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.469 "name": "raid_bdev1", 00:13:58.469 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:13:58.469 "strip_size_kb": 0, 00:13:58.469 "state": "online", 00:13:58.469 "raid_level": "raid1", 00:13:58.469 "superblock": true, 00:13:58.469 "num_base_bdevs": 2, 00:13:58.469 
"num_base_bdevs_discovered": 2, 00:13:58.469 "num_base_bdevs_operational": 2, 00:13:58.469 "base_bdevs_list": [ 00:13:58.469 { 00:13:58.469 "name": "spare", 00:13:58.469 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:13:58.469 "is_configured": true, 00:13:58.469 "data_offset": 2048, 00:13:58.469 "data_size": 63488 00:13:58.469 }, 00:13:58.469 { 00:13:58.469 "name": "BaseBdev2", 00:13:58.469 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:13:58.469 "is_configured": true, 00:13:58.469 "data_offset": 2048, 00:13:58.469 "data_size": 63488 00:13:58.469 } 00:13:58.469 ] 00:13:58.469 }' 00:13:58.469 03:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.469 03:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.728 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:58.728 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.728 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.988 [2024-11-05 03:25:12.366848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.988 [2024-11-05 03:25:12.366907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.988 [2024-11-05 03:25:12.366997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.988 [2024-11-05 03:25:12.367092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.988 [2024-11-05 03:25:12.367140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:58.988 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:59.248 /dev/nbd0 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.248 1+0 records in 00:13:59.248 1+0 records out 00:13:59.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329027 s, 12.4 MB/s 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:59.248 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.248 03:25:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:59.249 03:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:59.509 /dev/nbd1 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.509 1+0 records in 00:13:59.509 1+0 records out 00:13:59.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460821 s, 8.9 MB/s 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:59.509 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:59.769 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:59.769 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.769 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:59.770 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:59.770 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:59.770 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.770 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.029 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.326 03:25:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.326 [2024-11-05 03:25:13.836281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.326 [2024-11-05 03:25:13.836418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.326 [2024-11-05 03:25:13.836454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:00.326 [2024-11-05 03:25:13.836470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.326 [2024-11-05 03:25:13.839534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.326 [2024-11-05 03:25:13.839578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.326 [2024-11-05 03:25:13.839739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:00.326 [2024-11-05 03:25:13.839805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.326 [2024-11-05 03:25:13.840004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.326 spare 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.326 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.326 [2024-11-05 03:25:13.940145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:00.326 [2024-11-05 03:25:13.940212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:00.326 [2024-11-05 
03:25:13.940722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:00.327 [2024-11-05 03:25:13.941007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:00.327 [2024-11-05 03:25:13.941035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:00.327 [2024-11-05 03:25:13.941284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.327 03:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.586 03:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.586 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.586 "name": "raid_bdev1", 00:14:00.586 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:00.586 "strip_size_kb": 0, 00:14:00.586 "state": "online", 00:14:00.586 "raid_level": "raid1", 00:14:00.586 "superblock": true, 00:14:00.586 "num_base_bdevs": 2, 00:14:00.586 "num_base_bdevs_discovered": 2, 00:14:00.586 "num_base_bdevs_operational": 2, 00:14:00.586 "base_bdevs_list": [ 00:14:00.586 { 00:14:00.586 "name": "spare", 00:14:00.586 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:14:00.586 "is_configured": true, 00:14:00.586 "data_offset": 2048, 00:14:00.586 "data_size": 63488 00:14:00.586 }, 00:14:00.586 { 00:14:00.586 "name": "BaseBdev2", 00:14:00.586 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:00.586 "is_configured": true, 00:14:00.586 "data_offset": 2048, 00:14:00.586 "data_size": 63488 00:14:00.586 } 00:14:00.586 ] 00:14:00.586 }' 00:14:00.586 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.586 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.845 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.845 "name": "raid_bdev1", 00:14:00.845 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:00.845 "strip_size_kb": 0, 00:14:00.845 "state": "online", 00:14:00.845 "raid_level": "raid1", 00:14:00.845 "superblock": true, 00:14:00.845 "num_base_bdevs": 2, 00:14:00.845 "num_base_bdevs_discovered": 2, 00:14:00.845 "num_base_bdevs_operational": 2, 00:14:00.845 "base_bdevs_list": [ 00:14:00.845 { 00:14:00.845 "name": "spare", 00:14:00.845 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:14:00.845 "is_configured": true, 00:14:00.845 "data_offset": 2048, 00:14:00.845 "data_size": 63488 00:14:00.845 }, 00:14:00.845 { 00:14:00.845 "name": "BaseBdev2", 00:14:00.845 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:00.845 "is_configured": true, 00:14:00.845 "data_offset": 2048, 00:14:00.845 "data_size": 63488 00:14:00.845 } 00:14:00.845 ] 00:14:00.845 }' 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.104 03:25:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.104 [2024-11-05 03:25:14.649457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.104 03:25:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.104 "name": "raid_bdev1", 00:14:01.104 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:01.104 "strip_size_kb": 0, 00:14:01.104 "state": "online", 00:14:01.104 "raid_level": "raid1", 00:14:01.104 "superblock": true, 00:14:01.104 "num_base_bdevs": 2, 00:14:01.104 "num_base_bdevs_discovered": 1, 00:14:01.104 "num_base_bdevs_operational": 1, 00:14:01.104 "base_bdevs_list": [ 00:14:01.104 { 00:14:01.104 "name": null, 00:14:01.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.104 "is_configured": false, 00:14:01.104 "data_offset": 0, 00:14:01.104 "data_size": 63488 00:14:01.104 }, 00:14:01.104 { 00:14:01.104 "name": "BaseBdev2", 00:14:01.104 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:01.104 "is_configured": true, 00:14:01.104 "data_offset": 2048, 00:14:01.104 "data_size": 63488 00:14:01.104 } 00:14:01.104 ] 00:14:01.104 }' 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.104 03:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:01.671 03:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.671 03:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.671 03:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.671 [2024-11-05 03:25:15.169637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.671 [2024-11-05 03:25:15.169875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:01.671 [2024-11-05 03:25:15.169932] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:01.671 [2024-11-05 03:25:15.170032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.672 [2024-11-05 03:25:15.185254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:01.672 03:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.672 03:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:01.672 [2024-11-05 03:25:15.188138] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.608 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.867 "name": "raid_bdev1", 00:14:02.867 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:02.867 "strip_size_kb": 0, 00:14:02.867 "state": "online", 00:14:02.867 "raid_level": "raid1", 00:14:02.867 "superblock": true, 00:14:02.867 "num_base_bdevs": 2, 00:14:02.867 "num_base_bdevs_discovered": 2, 00:14:02.867 "num_base_bdevs_operational": 2, 00:14:02.867 "process": { 00:14:02.867 "type": "rebuild", 00:14:02.867 "target": "spare", 00:14:02.867 "progress": { 00:14:02.867 "blocks": 20480, 00:14:02.867 "percent": 32 00:14:02.867 } 00:14:02.867 }, 00:14:02.867 "base_bdevs_list": [ 00:14:02.867 { 00:14:02.867 "name": "spare", 00:14:02.867 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:14:02.867 "is_configured": true, 00:14:02.867 "data_offset": 2048, 00:14:02.867 "data_size": 63488 00:14:02.867 }, 00:14:02.867 { 00:14:02.867 "name": "BaseBdev2", 00:14:02.867 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:02.867 "is_configured": true, 00:14:02.867 "data_offset": 2048, 00:14:02.867 "data_size": 63488 00:14:02.867 } 00:14:02.867 ] 00:14:02.867 }' 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.867 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.867 [2024-11-05 03:25:16.361116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.868 [2024-11-05 03:25:16.397001] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.868 [2024-11-05 03:25:16.397105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.868 [2024-11-05 03:25:16.397127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.868 [2024-11-05 03:25:16.397141] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.868 
03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.868 "name": "raid_bdev1", 00:14:02.868 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:02.868 "strip_size_kb": 0, 00:14:02.868 "state": "online", 00:14:02.868 "raid_level": "raid1", 00:14:02.868 "superblock": true, 00:14:02.868 "num_base_bdevs": 2, 00:14:02.868 "num_base_bdevs_discovered": 1, 00:14:02.868 "num_base_bdevs_operational": 1, 00:14:02.868 "base_bdevs_list": [ 00:14:02.868 { 00:14:02.868 "name": null, 00:14:02.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.868 "is_configured": false, 00:14:02.868 "data_offset": 0, 00:14:02.868 "data_size": 63488 00:14:02.868 }, 00:14:02.868 { 00:14:02.868 "name": "BaseBdev2", 00:14:02.868 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:02.868 "is_configured": true, 00:14:02.868 "data_offset": 2048, 00:14:02.868 "data_size": 63488 00:14:02.868 } 00:14:02.868 ] 00:14:02.868 }' 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.868 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:03.435 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:03.435 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.435 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.435 [2024-11-05 03:25:16.960045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:03.436 [2024-11-05 03:25:16.960160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.436 [2024-11-05 03:25:16.960195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:03.436 [2024-11-05 03:25:16.960211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.436 [2024-11-05 03:25:16.960879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.436 [2024-11-05 03:25:16.960954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:03.436 [2024-11-05 03:25:16.961066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:03.436 [2024-11-05 03:25:16.961089] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:03.436 [2024-11-05 03:25:16.961102] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:03.436 [2024-11-05 03:25:16.961136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.436 [2024-11-05 03:25:16.976266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:03.436 spare 00:14:03.436 03:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.436 03:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:03.436 [2024-11-05 03:25:16.979080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.375 03:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.375 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.645 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.645 "name": "raid_bdev1", 00:14:04.645 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:04.645 "strip_size_kb": 0, 00:14:04.645 "state": "online", 00:14:04.645 
"raid_level": "raid1", 00:14:04.645 "superblock": true, 00:14:04.645 "num_base_bdevs": 2, 00:14:04.645 "num_base_bdevs_discovered": 2, 00:14:04.645 "num_base_bdevs_operational": 2, 00:14:04.645 "process": { 00:14:04.645 "type": "rebuild", 00:14:04.645 "target": "spare", 00:14:04.646 "progress": { 00:14:04.646 "blocks": 20480, 00:14:04.646 "percent": 32 00:14:04.646 } 00:14:04.646 }, 00:14:04.646 "base_bdevs_list": [ 00:14:04.646 { 00:14:04.646 "name": "spare", 00:14:04.646 "uuid": "19143b85-f043-507d-9e92-be2b6995241a", 00:14:04.646 "is_configured": true, 00:14:04.646 "data_offset": 2048, 00:14:04.646 "data_size": 63488 00:14:04.646 }, 00:14:04.646 { 00:14:04.646 "name": "BaseBdev2", 00:14:04.646 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:04.646 "is_configured": true, 00:14:04.646 "data_offset": 2048, 00:14:04.646 "data_size": 63488 00:14:04.646 } 00:14:04.646 ] 00:14:04.646 }' 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.646 [2024-11-05 03:25:18.148561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.646 [2024-11-05 03:25:18.188492] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.646 [2024-11-05 03:25:18.188625] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.646 [2024-11-05 03:25:18.188654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.646 [2024-11-05 03:25:18.188665] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.646 03:25:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.646 "name": "raid_bdev1", 00:14:04.646 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:04.646 "strip_size_kb": 0, 00:14:04.646 "state": "online", 00:14:04.646 "raid_level": "raid1", 00:14:04.646 "superblock": true, 00:14:04.646 "num_base_bdevs": 2, 00:14:04.646 "num_base_bdevs_discovered": 1, 00:14:04.646 "num_base_bdevs_operational": 1, 00:14:04.646 "base_bdevs_list": [ 00:14:04.646 { 00:14:04.646 "name": null, 00:14:04.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.646 "is_configured": false, 00:14:04.646 "data_offset": 0, 00:14:04.646 "data_size": 63488 00:14:04.646 }, 00:14:04.646 { 00:14:04.646 "name": "BaseBdev2", 00:14:04.646 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:04.646 "is_configured": true, 00:14:04.646 "data_offset": 2048, 00:14:04.646 "data_size": 63488 00:14:04.646 } 00:14:04.646 ] 00:14:04.646 }' 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.646 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.224 "name": "raid_bdev1", 00:14:05.224 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:05.224 "strip_size_kb": 0, 00:14:05.224 "state": "online", 00:14:05.224 "raid_level": "raid1", 00:14:05.224 "superblock": true, 00:14:05.224 "num_base_bdevs": 2, 00:14:05.224 "num_base_bdevs_discovered": 1, 00:14:05.224 "num_base_bdevs_operational": 1, 00:14:05.224 "base_bdevs_list": [ 00:14:05.224 { 00:14:05.224 "name": null, 00:14:05.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.224 "is_configured": false, 00:14:05.224 "data_offset": 0, 00:14:05.224 "data_size": 63488 00:14:05.224 }, 00:14:05.224 { 00:14:05.224 "name": "BaseBdev2", 00:14:05.224 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:05.224 "is_configured": true, 00:14:05.224 "data_offset": 2048, 00:14:05.224 "data_size": 63488 00:14:05.224 } 00:14:05.224 ] 00:14:05.224 }' 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.224 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.483 [2024-11-05 03:25:18.920405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:05.483 [2024-11-05 03:25:18.920469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.483 [2024-11-05 03:25:18.920500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:05.483 [2024-11-05 03:25:18.920525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.483 [2024-11-05 03:25:18.921088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.483 [2024-11-05 03:25:18.921136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:05.483 [2024-11-05 03:25:18.921264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:05.483 [2024-11-05 03:25:18.921290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.483 [2024-11-05 03:25:18.921320] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:05.483 [2024-11-05 03:25:18.921345] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:05.483 BaseBdev1 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.483 03:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.420 "name": "raid_bdev1", 00:14:06.420 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:06.420 
"strip_size_kb": 0, 00:14:06.420 "state": "online", 00:14:06.420 "raid_level": "raid1", 00:14:06.420 "superblock": true, 00:14:06.420 "num_base_bdevs": 2, 00:14:06.420 "num_base_bdevs_discovered": 1, 00:14:06.420 "num_base_bdevs_operational": 1, 00:14:06.420 "base_bdevs_list": [ 00:14:06.420 { 00:14:06.420 "name": null, 00:14:06.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.420 "is_configured": false, 00:14:06.420 "data_offset": 0, 00:14:06.420 "data_size": 63488 00:14:06.420 }, 00:14:06.420 { 00:14:06.420 "name": "BaseBdev2", 00:14:06.420 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:06.420 "is_configured": true, 00:14:06.420 "data_offset": 2048, 00:14:06.420 "data_size": 63488 00:14:06.420 } 00:14:06.420 ] 00:14:06.420 }' 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.420 03:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.987 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.987 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.987 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.987 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.987 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.988 03:25:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.988 "name": "raid_bdev1", 00:14:06.988 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:06.988 "strip_size_kb": 0, 00:14:06.988 "state": "online", 00:14:06.988 "raid_level": "raid1", 00:14:06.988 "superblock": true, 00:14:06.988 "num_base_bdevs": 2, 00:14:06.988 "num_base_bdevs_discovered": 1, 00:14:06.988 "num_base_bdevs_operational": 1, 00:14:06.988 "base_bdevs_list": [ 00:14:06.988 { 00:14:06.988 "name": null, 00:14:06.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.988 "is_configured": false, 00:14:06.988 "data_offset": 0, 00:14:06.988 "data_size": 63488 00:14:06.988 }, 00:14:06.988 { 00:14:06.988 "name": "BaseBdev2", 00:14:06.988 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:06.988 "is_configured": true, 00:14:06.988 "data_offset": 2048, 00:14:06.988 "data_size": 63488 00:14:06.988 } 00:14:06.988 ] 00:14:06.988 }' 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:06.988 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.246 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.246 [2024-11-05 03:25:20.629083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.247 [2024-11-05 03:25:20.629385] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.247 [2024-11-05 03:25:20.629417] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.247 request: 00:14:07.247 { 00:14:07.247 "base_bdev": "BaseBdev1", 00:14:07.247 "raid_bdev": "raid_bdev1", 00:14:07.247 "method": "bdev_raid_add_base_bdev", 00:14:07.247 "req_id": 1 00:14:07.247 } 00:14:07.247 Got JSON-RPC error response 00:14:07.247 response: 00:14:07.247 { 00:14:07.247 "code": -22, 00:14:07.247 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:07.247 } 00:14:07.247 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:07.247 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:07.247 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.247 03:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.247 03:25:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.247 03:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.183 "name": "raid_bdev1", 00:14:08.183 "uuid": 
"9a93d087-410e-4f33-902c-976b1dd40224", 00:14:08.183 "strip_size_kb": 0, 00:14:08.183 "state": "online", 00:14:08.183 "raid_level": "raid1", 00:14:08.183 "superblock": true, 00:14:08.183 "num_base_bdevs": 2, 00:14:08.183 "num_base_bdevs_discovered": 1, 00:14:08.183 "num_base_bdevs_operational": 1, 00:14:08.183 "base_bdevs_list": [ 00:14:08.183 { 00:14:08.183 "name": null, 00:14:08.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.183 "is_configured": false, 00:14:08.183 "data_offset": 0, 00:14:08.183 "data_size": 63488 00:14:08.183 }, 00:14:08.183 { 00:14:08.183 "name": "BaseBdev2", 00:14:08.183 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:08.183 "is_configured": true, 00:14:08.183 "data_offset": 2048, 00:14:08.183 "data_size": 63488 00:14:08.183 } 00:14:08.183 ] 00:14:08.183 }' 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.183 03:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.751 "name": "raid_bdev1", 00:14:08.751 "uuid": "9a93d087-410e-4f33-902c-976b1dd40224", 00:14:08.751 "strip_size_kb": 0, 00:14:08.751 "state": "online", 00:14:08.751 "raid_level": "raid1", 00:14:08.751 "superblock": true, 00:14:08.751 "num_base_bdevs": 2, 00:14:08.751 "num_base_bdevs_discovered": 1, 00:14:08.751 "num_base_bdevs_operational": 1, 00:14:08.751 "base_bdevs_list": [ 00:14:08.751 { 00:14:08.751 "name": null, 00:14:08.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.751 "is_configured": false, 00:14:08.751 "data_offset": 0, 00:14:08.751 "data_size": 63488 00:14:08.751 }, 00:14:08.751 { 00:14:08.751 "name": "BaseBdev2", 00:14:08.751 "uuid": "9ca658c1-8886-5696-9201-230a6b1b4844", 00:14:08.751 "is_configured": true, 00:14:08.751 "data_offset": 2048, 00:14:08.751 "data_size": 63488 00:14:08.751 } 00:14:08.751 ] 00:14:08.751 }' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75662 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75662 ']' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75662 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75662 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75662' 00:14:08.751 killing process with pid 75662 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75662 00:14:08.751 Received shutdown signal, test time was about 60.000000 seconds 00:14:08.751 00:14:08.751 Latency(us) 00:14:08.751 [2024-11-05T03:25:22.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.751 [2024-11-05T03:25:22.390Z] =================================================================================================================== 00:14:08.751 [2024-11-05T03:25:22.390Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.751 [2024-11-05 03:25:22.379249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.751 03:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75662 00:14:08.751 [2024-11-05 03:25:22.379419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.751 [2024-11-05 03:25:22.379486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.751 [2024-11-05 03:25:22.379512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:09.010 [2024-11-05 03:25:22.629163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.947 03:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:14:09.947 00:14:09.947 real 0m27.038s 00:14:09.947 user 0m33.532s 00:14:09.947 sys 0m4.245s 00:14:09.947 03:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.947 03:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.947 ************************************ 00:14:09.947 END TEST raid_rebuild_test_sb 00:14:09.947 ************************************ 00:14:10.209 03:25:23 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:10.209 03:25:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:10.209 03:25:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.209 03:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.209 ************************************ 00:14:10.209 START TEST raid_rebuild_test_io 00:14:10.209 ************************************ 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76431 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76431 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76431 ']' 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.209 03:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.209 [2024-11-05 03:25:23.718614] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:10.209 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.209 Zero copy mechanism will not be used. 00:14:10.209 [2024-11-05 03:25:23.718834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76431 ] 00:14:10.468 [2024-11-05 03:25:23.902675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.468 [2024-11-05 03:25:24.024873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.726 [2024-11-05 03:25:24.204821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.726 [2024-11-05 03:25:24.204898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 BaseBdev1_malloc 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 [2024-11-05 03:25:24.746089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.294 [2024-11-05 03:25:24.746198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.294 [2024-11-05 03:25:24.746228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.294 [2024-11-05 03:25:24.746245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.294 [2024-11-05 03:25:24.749132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.294 [2024-11-05 03:25:24.749209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.294 BaseBdev1 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 BaseBdev2_malloc 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 [2024-11-05 03:25:24.800260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.294 [2024-11-05 03:25:24.800394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.294 [2024-11-05 03:25:24.800422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.294 [2024-11-05 03:25:24.800441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.294 [2024-11-05 03:25:24.803176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.294 [2024-11-05 03:25:24.803259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.294 BaseBdev2 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 spare_malloc 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 spare_delay 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 [2024-11-05 03:25:24.874605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.294 [2024-11-05 03:25:24.874725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.294 [2024-11-05 03:25:24.874753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:11.294 [2024-11-05 03:25:24.874770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.294 [2024-11-05 03:25:24.877389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.294 [2024-11-05 03:25:24.877467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.294 spare 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 
03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 [2024-11-05 03:25:24.882664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.294 [2024-11-05 03:25:24.884970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.294 [2024-11-05 03:25:24.885102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:11.294 [2024-11-05 03:25:24.885122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:11.294 [2024-11-05 03:25:24.885517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:11.294 [2024-11-05 03:25:24.885764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:11.294 [2024-11-05 03:25:24.885812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:11.294 [2024-11-05 03:25:24.886027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.294 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.295 03:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.553 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.553 "name": "raid_bdev1", 00:14:11.553 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:11.553 "strip_size_kb": 0, 00:14:11.553 "state": "online", 00:14:11.553 "raid_level": "raid1", 00:14:11.553 "superblock": false, 00:14:11.553 "num_base_bdevs": 2, 00:14:11.553 "num_base_bdevs_discovered": 2, 00:14:11.553 "num_base_bdevs_operational": 2, 00:14:11.553 "base_bdevs_list": [ 00:14:11.553 { 00:14:11.553 "name": "BaseBdev1", 00:14:11.553 "uuid": "89c020f2-57ed-526c-8a6a-a81d3581eb4e", 00:14:11.553 "is_configured": true, 00:14:11.553 "data_offset": 0, 00:14:11.553 "data_size": 65536 00:14:11.553 }, 00:14:11.553 { 00:14:11.553 "name": "BaseBdev2", 00:14:11.553 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:11.553 "is_configured": true, 00:14:11.553 "data_offset": 0, 00:14:11.553 "data_size": 65536 00:14:11.553 } 00:14:11.553 ] 00:14:11.553 }' 00:14:11.553 03:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.553 03:25:24 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:11.812 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.812 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:11.812 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.812 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.812 [2024-11-05 03:25:25.419164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.812 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:12.072 [2024-11-05 03:25:25.522853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:12.072 "name": "raid_bdev1", 00:14:12.072 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:12.072 "strip_size_kb": 0, 00:14:12.072 "state": "online", 00:14:12.072 "raid_level": "raid1", 00:14:12.072 "superblock": false, 00:14:12.072 "num_base_bdevs": 2, 00:14:12.072 "num_base_bdevs_discovered": 1, 00:14:12.072 "num_base_bdevs_operational": 1, 00:14:12.072 "base_bdevs_list": [ 00:14:12.072 { 00:14:12.072 "name": null, 00:14:12.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.072 "is_configured": false, 00:14:12.072 "data_offset": 0, 00:14:12.072 "data_size": 65536 00:14:12.072 }, 00:14:12.072 { 00:14:12.072 "name": "BaseBdev2", 00:14:12.072 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:12.072 "is_configured": true, 00:14:12.072 "data_offset": 0, 00:14:12.072 "data_size": 65536 00:14:12.072 } 00:14:12.072 ] 00:14:12.072 }' 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.072 03:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.072 [2024-11-05 03:25:25.655559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:12.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.072 Zero copy mechanism will not be used. 00:14:12.072 Running I/O for 60 seconds... 
00:14:12.640 03:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:12.640 03:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.640 03:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.640 [2024-11-05 03:25:26.071787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.640 03:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.640 03:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:12.640 [2024-11-05 03:25:26.127337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:12.640 [2024-11-05 03:25:26.129899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.640 [2024-11-05 03:25:26.238364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:12.640 [2024-11-05 03:25:26.239046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:12.899 [2024-11-05 03:25:26.448545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:12.899 [2024-11-05 03:25:26.448958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.417 187.00 IOPS, 561.00 MiB/s [2024-11-05T03:25:27.056Z] [2024-11-05 03:25:26.797726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:13.417 [2024-11-05 03:25:26.922228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:13.417 [2024-11-05 03:25:26.922649] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.676 "name": "raid_bdev1", 00:14:13.676 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:13.676 "strip_size_kb": 0, 00:14:13.676 "state": "online", 00:14:13.676 "raid_level": "raid1", 00:14:13.676 "superblock": false, 00:14:13.676 "num_base_bdevs": 2, 00:14:13.676 "num_base_bdevs_discovered": 2, 00:14:13.676 "num_base_bdevs_operational": 2, 00:14:13.676 "process": { 00:14:13.676 "type": "rebuild", 00:14:13.676 "target": "spare", 00:14:13.676 "progress": { 00:14:13.676 "blocks": 10240, 00:14:13.676 "percent": 15 00:14:13.676 } 00:14:13.676 }, 00:14:13.676 "base_bdevs_list": [ 00:14:13.676 { 00:14:13.676 "name": "spare", 00:14:13.676 "uuid": 
"ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:13.676 "is_configured": true, 00:14:13.676 "data_offset": 0, 00:14:13.676 "data_size": 65536 00:14:13.676 }, 00:14:13.676 { 00:14:13.676 "name": "BaseBdev2", 00:14:13.676 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:13.676 "is_configured": true, 00:14:13.676 "data_offset": 0, 00:14:13.676 "data_size": 65536 00:14:13.676 } 00:14:13.676 ] 00:14:13.676 }' 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.676 [2024-11-05 03:25:27.235931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:13.676 [2024-11-05 03:25:27.236846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.676 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.676 [2024-11-05 03:25:27.293211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.935 [2024-11-05 03:25:27.352788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:13.935 [2024-11-05 03:25:27.353442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:13.935 [2024-11-05 03:25:27.461291] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:13.936 [2024-11-05 03:25:27.470462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.936 [2024-11-05 03:25:27.470504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.936 [2024-11-05 03:25:27.470518] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:13.936 [2024-11-05 03:25:27.519568] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.936 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.195 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.195 "name": "raid_bdev1", 00:14:14.195 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:14.195 "strip_size_kb": 0, 00:14:14.195 "state": "online", 00:14:14.195 "raid_level": "raid1", 00:14:14.195 "superblock": false, 00:14:14.195 "num_base_bdevs": 2, 00:14:14.195 "num_base_bdevs_discovered": 1, 00:14:14.195 "num_base_bdevs_operational": 1, 00:14:14.195 "base_bdevs_list": [ 00:14:14.195 { 00:14:14.195 "name": null, 00:14:14.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.195 "is_configured": false, 00:14:14.195 "data_offset": 0, 00:14:14.195 "data_size": 65536 00:14:14.195 }, 00:14:14.195 { 00:14:14.195 "name": "BaseBdev2", 00:14:14.195 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:14.195 "is_configured": true, 00:14:14.195 "data_offset": 0, 00:14:14.195 "data_size": 65536 00:14:14.195 } 00:14:14.195 ] 00:14:14.195 }' 00:14:14.195 03:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.195 03:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.454 135.00 IOPS, 405.00 MiB/s [2024-11-05T03:25:28.093Z] 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.455 03:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.714 "name": "raid_bdev1", 00:14:14.714 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:14.714 "strip_size_kb": 0, 00:14:14.714 "state": "online", 00:14:14.714 "raid_level": "raid1", 00:14:14.714 "superblock": false, 00:14:14.714 "num_base_bdevs": 2, 00:14:14.714 "num_base_bdevs_discovered": 1, 00:14:14.714 "num_base_bdevs_operational": 1, 00:14:14.714 "base_bdevs_list": [ 00:14:14.714 { 00:14:14.714 "name": null, 00:14:14.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.714 "is_configured": false, 00:14:14.714 "data_offset": 0, 00:14:14.714 "data_size": 65536 00:14:14.714 }, 00:14:14.714 { 00:14:14.714 "name": "BaseBdev2", 00:14:14.714 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:14.714 "is_configured": true, 00:14:14.714 "data_offset": 0, 00:14:14.714 "data_size": 65536 00:14:14.714 } 00:14:14.714 ] 00:14:14.714 }' 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.714 [2024-11-05 03:25:28.248355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.714 03:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:14.714 [2024-11-05 03:25:28.310308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:14.714 [2024-11-05 03:25:28.312824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.973 [2024-11-05 03:25:28.436932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.973 [2024-11-05 03:25:28.437564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.232 156.00 IOPS, 468.00 MiB/s [2024-11-05T03:25:28.871Z] [2024-11-05 03:25:28.670526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.232 [2024-11-05 03:25:28.670865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.490 [2024-11-05 03:25:28.947472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:15.490 [2024-11-05 03:25:28.948188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
8192 offset_begin: 6144 offset_end: 12288 00:14:15.749 [2024-11-05 03:25:29.153377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:15.749 [2024-11-05 03:25:29.153770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.749 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.749 "name": "raid_bdev1", 00:14:15.749 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:15.749 "strip_size_kb": 0, 00:14:15.749 "state": "online", 00:14:15.749 "raid_level": "raid1", 00:14:15.749 "superblock": false, 00:14:15.749 "num_base_bdevs": 2, 00:14:15.749 "num_base_bdevs_discovered": 2, 00:14:15.749 "num_base_bdevs_operational": 2, 00:14:15.749 "process": { 00:14:15.749 "type": "rebuild", 
00:14:15.749 "target": "spare", 00:14:15.749 "progress": { 00:14:15.749 "blocks": 10240, 00:14:15.749 "percent": 15 00:14:15.749 } 00:14:15.749 }, 00:14:15.749 "base_bdevs_list": [ 00:14:15.749 { 00:14:15.749 "name": "spare", 00:14:15.749 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:15.749 "is_configured": true, 00:14:15.749 "data_offset": 0, 00:14:15.749 "data_size": 65536 00:14:15.749 }, 00:14:15.749 { 00:14:15.749 "name": "BaseBdev2", 00:14:15.750 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:15.750 "is_configured": true, 00:14:15.750 "data_offset": 0, 00:14:15.750 "data_size": 65536 00:14:15.750 } 00:14:15.750 ] 00:14:15.750 }' 00:14:15.750 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.008 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.009 [2024-11-05 03:25:29.471877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.009 "name": "raid_bdev1", 00:14:16.009 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:16.009 "strip_size_kb": 0, 00:14:16.009 "state": "online", 00:14:16.009 "raid_level": "raid1", 00:14:16.009 "superblock": false, 00:14:16.009 "num_base_bdevs": 2, 00:14:16.009 "num_base_bdevs_discovered": 2, 00:14:16.009 "num_base_bdevs_operational": 2, 00:14:16.009 "process": { 00:14:16.009 "type": "rebuild", 00:14:16.009 "target": "spare", 00:14:16.009 "progress": { 00:14:16.009 "blocks": 12288, 00:14:16.009 "percent": 18 00:14:16.009 } 00:14:16.009 }, 00:14:16.009 "base_bdevs_list": [ 00:14:16.009 { 00:14:16.009 "name": "spare", 00:14:16.009 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:16.009 "is_configured": true, 00:14:16.009 "data_offset": 0, 00:14:16.009 "data_size": 65536 00:14:16.009 }, 00:14:16.009 { 00:14:16.009 "name": "BaseBdev2", 
00:14:16.009 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:16.009 "is_configured": true, 00:14:16.009 "data_offset": 0, 00:14:16.009 "data_size": 65536 00:14:16.009 } 00:14:16.009 ] 00:14:16.009 }' 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.009 [2024-11-05 03:25:29.592488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.009 03:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.268 141.00 IOPS, 423.00 MiB/s [2024-11-05T03:25:29.907Z] [2024-11-05 03:25:29.832157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:16.526 [2024-11-05 03:25:30.050118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:16.784 [2024-11-05 03:25:30.371902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:17.042 [2024-11-05 03:25:30.492204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.043 
03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.043 03:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.043 124.40 IOPS, 373.20 MiB/s [2024-11-05T03:25:30.682Z] 03:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.302 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.302 "name": "raid_bdev1", 00:14:17.302 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:17.302 "strip_size_kb": 0, 00:14:17.302 "state": "online", 00:14:17.302 "raid_level": "raid1", 00:14:17.302 "superblock": false, 00:14:17.302 "num_base_bdevs": 2, 00:14:17.302 "num_base_bdevs_discovered": 2, 00:14:17.302 "num_base_bdevs_operational": 2, 00:14:17.302 "process": { 00:14:17.302 "type": "rebuild", 00:14:17.302 "target": "spare", 00:14:17.302 "progress": { 00:14:17.302 "blocks": 30720, 00:14:17.302 "percent": 46 00:14:17.302 } 00:14:17.302 }, 00:14:17.302 "base_bdevs_list": [ 00:14:17.302 { 00:14:17.302 "name": "spare", 00:14:17.302 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:17.302 "is_configured": true, 00:14:17.302 "data_offset": 0, 00:14:17.302 "data_size": 65536 00:14:17.302 }, 00:14:17.302 { 00:14:17.302 "name": "BaseBdev2", 00:14:17.302 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:17.302 "is_configured": true, 00:14:17.302 "data_offset": 0, 
00:14:17.302 "data_size": 65536 00:14:17.302 } 00:14:17.302 ] 00:14:17.302 }' 00:14:17.302 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.302 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.302 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.302 [2024-11-05 03:25:30.771826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:17.302 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.302 03:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.562 [2024-11-05 03:25:30.980128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:18.129 [2024-11-05 03:25:31.581004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:18.388 112.00 IOPS, 336.00 MiB/s [2024-11-05T03:25:32.027Z] 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.388 03:25:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.388 "name": "raid_bdev1", 00:14:18.388 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:18.388 "strip_size_kb": 0, 00:14:18.388 "state": "online", 00:14:18.388 "raid_level": "raid1", 00:14:18.388 "superblock": false, 00:14:18.388 "num_base_bdevs": 2, 00:14:18.388 "num_base_bdevs_discovered": 2, 00:14:18.388 "num_base_bdevs_operational": 2, 00:14:18.388 "process": { 00:14:18.388 "type": "rebuild", 00:14:18.388 "target": "spare", 00:14:18.388 "progress": { 00:14:18.388 "blocks": 47104, 00:14:18.388 "percent": 71 00:14:18.388 } 00:14:18.388 }, 00:14:18.388 "base_bdevs_list": [ 00:14:18.388 { 00:14:18.388 "name": "spare", 00:14:18.388 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:18.388 "is_configured": true, 00:14:18.388 "data_offset": 0, 00:14:18.388 "data_size": 65536 00:14:18.388 }, 00:14:18.388 { 00:14:18.388 "name": "BaseBdev2", 00:14:18.388 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:18.388 "is_configured": true, 00:14:18.388 "data_offset": 0, 00:14:18.388 "data_size": 65536 00:14:18.388 } 00:14:18.388 ] 00:14:18.388 }' 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.388 03:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.955 [2024-11-05 03:25:32.298012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:18.955 [2024-11-05 03:25:32.507906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:19.485 100.86 IOPS, 302.57 MiB/s [2024-11-05T03:25:33.124Z] [2024-11-05 03:25:32.953870] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.485 03:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.485 03:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.485 "name": 
"raid_bdev1", 00:14:19.485 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:19.485 "strip_size_kb": 0, 00:14:19.485 "state": "online", 00:14:19.485 "raid_level": "raid1", 00:14:19.485 "superblock": false, 00:14:19.485 "num_base_bdevs": 2, 00:14:19.485 "num_base_bdevs_discovered": 2, 00:14:19.485 "num_base_bdevs_operational": 2, 00:14:19.485 "process": { 00:14:19.485 "type": "rebuild", 00:14:19.485 "target": "spare", 00:14:19.485 "progress": { 00:14:19.485 "blocks": 65536, 00:14:19.485 "percent": 100 00:14:19.485 } 00:14:19.485 }, 00:14:19.485 "base_bdevs_list": [ 00:14:19.485 { 00:14:19.485 "name": "spare", 00:14:19.485 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:19.485 "is_configured": true, 00:14:19.485 "data_offset": 0, 00:14:19.485 "data_size": 65536 00:14:19.485 }, 00:14:19.485 { 00:14:19.485 "name": "BaseBdev2", 00:14:19.485 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:19.485 "is_configured": true, 00:14:19.485 "data_offset": 0, 00:14:19.485 "data_size": 65536 00:14:19.485 } 00:14:19.485 ] 00:14:19.485 }' 00:14:19.485 03:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.485 [2024-11-05 03:25:33.060561] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:19.485 [2024-11-05 03:25:33.063704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.485 03:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.485 03:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.756 03:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.756 03:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.582 91.25 IOPS, 273.75 MiB/s [2024-11-05T03:25:34.221Z] 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.582 "name": "raid_bdev1", 00:14:20.582 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:20.582 "strip_size_kb": 0, 00:14:20.582 "state": "online", 00:14:20.582 "raid_level": "raid1", 00:14:20.582 "superblock": false, 00:14:20.582 "num_base_bdevs": 2, 00:14:20.582 "num_base_bdevs_discovered": 2, 00:14:20.582 "num_base_bdevs_operational": 2, 00:14:20.582 "base_bdevs_list": [ 00:14:20.582 { 00:14:20.582 "name": "spare", 00:14:20.582 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:20.582 "is_configured": true, 00:14:20.582 "data_offset": 0, 00:14:20.582 "data_size": 65536 00:14:20.582 }, 00:14:20.582 { 00:14:20.582 "name": "BaseBdev2", 00:14:20.582 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:20.582 "is_configured": true, 00:14:20.582 
"data_offset": 0, 00:14:20.582 "data_size": 65536 00:14:20.582 } 00:14:20.582 ] 00:14:20.582 }' 00:14:20.582 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.841 "name": "raid_bdev1", 00:14:20.841 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:20.841 "strip_size_kb": 0, 00:14:20.841 "state": 
"online", 00:14:20.841 "raid_level": "raid1", 00:14:20.841 "superblock": false, 00:14:20.841 "num_base_bdevs": 2, 00:14:20.841 "num_base_bdevs_discovered": 2, 00:14:20.841 "num_base_bdevs_operational": 2, 00:14:20.841 "base_bdevs_list": [ 00:14:20.841 { 00:14:20.841 "name": "spare", 00:14:20.841 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:20.841 "is_configured": true, 00:14:20.841 "data_offset": 0, 00:14:20.841 "data_size": 65536 00:14:20.841 }, 00:14:20.841 { 00:14:20.841 "name": "BaseBdev2", 00:14:20.841 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:20.841 "is_configured": true, 00:14:20.841 "data_offset": 0, 00:14:20.841 "data_size": 65536 00:14:20.841 } 00:14:20.841 ] 00:14:20.841 }' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.841 03:25:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.841 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.100 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.100 "name": "raid_bdev1", 00:14:21.100 "uuid": "3326b461-ff49-46cb-b2d4-61c31b2f08b8", 00:14:21.100 "strip_size_kb": 0, 00:14:21.100 "state": "online", 00:14:21.100 "raid_level": "raid1", 00:14:21.100 "superblock": false, 00:14:21.100 "num_base_bdevs": 2, 00:14:21.100 "num_base_bdevs_discovered": 2, 00:14:21.100 "num_base_bdevs_operational": 2, 00:14:21.100 "base_bdevs_list": [ 00:14:21.100 { 00:14:21.100 "name": "spare", 00:14:21.100 "uuid": "ef20dfc5-8743-50ff-a48e-797e633520ad", 00:14:21.100 "is_configured": true, 00:14:21.100 "data_offset": 0, 00:14:21.100 "data_size": 65536 00:14:21.100 }, 00:14:21.100 { 00:14:21.100 "name": "BaseBdev2", 00:14:21.100 "uuid": "0a01d0b2-03a0-5ca8-b09b-fcb12b6d9eff", 00:14:21.100 "is_configured": true, 00:14:21.100 "data_offset": 0, 00:14:21.100 "data_size": 65536 00:14:21.100 } 00:14:21.100 ] 00:14:21.100 }' 00:14:21.100 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.100 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:21.359 85.44 IOPS, 256.33 MiB/s [2024-11-05T03:25:34.998Z] 03:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.359 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.359 03:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.359 [2024-11-05 03:25:34.987168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.359 [2024-11-05 03:25:34.987201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.618 00:14:21.619 Latency(us) 00:14:21.619 [2024-11-05T03:25:35.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.619 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:21.619 raid_bdev1 : 9.37 83.65 250.95 0.00 0.00 16560.65 253.21 111053.73 00:14:21.619 [2024-11-05T03:25:35.258Z] =================================================================================================================== 00:14:21.619 [2024-11-05T03:25:35.258Z] Total : 83.65 250.95 0.00 0.00 16560.65 253.21 111053.73 00:14:21.619 [2024-11-05 03:25:35.051299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.619 [2024-11-05 03:25:35.051401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.619 [2024-11-05 03:25:35.051546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.619 [2024-11-05 03:25:35.051563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.619 { 00:14:21.619 "results": [ 00:14:21.619 { 00:14:21.619 "job": "raid_bdev1", 00:14:21.619 "core_mask": "0x1", 00:14:21.619 "workload": "randrw", 00:14:21.619 "percentage": 50, 00:14:21.619 "status": "finished", 00:14:21.619 
"queue_depth": 2, 00:14:21.619 "io_size": 3145728, 00:14:21.619 "runtime": 9.372443, 00:14:21.619 "iops": 83.64948178399165, 00:14:21.619 "mibps": 250.94844535197495, 00:14:21.619 "io_failed": 0, 00:14:21.619 "io_timeout": 0, 00:14:21.619 "avg_latency_us": 16560.645936920224, 00:14:21.619 "min_latency_us": 253.20727272727274, 00:14:21.619 "max_latency_us": 111053.73090909091 00:14:21.619 } 00:14:21.619 ], 00:14:21.619 "core_count": 1 00:14:21.619 } 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.619 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:21.878 /dev/nbd0 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.878 1+0 records in 00:14:21.878 1+0 records out 00:14:21.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336386 s, 12.2 MB/s 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.878 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:22.137 /dev/nbd1 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.137 1+0 records in 00:14:22.137 1+0 records out 00:14:22.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366359 s, 11.2 MB/s 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:22.137 03:25:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.137 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.396 03:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.655 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76431 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76431 ']' 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@956 -- # kill -0 76431 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76431 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:23.223 killing process with pid 76431 00:14:23.223 Received shutdown signal, test time was about 10.983851 seconds 00:14:23.223 00:14:23.223 Latency(us) 00:14:23.223 [2024-11-05T03:25:36.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.223 [2024-11-05T03:25:36.862Z] =================================================================================================================== 00:14:23.223 [2024-11-05T03:25:36.862Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76431' 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76431 00:14:23.223 [2024-11-05 03:25:36.642504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.223 03:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76431 00:14:23.223 [2024-11-05 03:25:36.827366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:24.602 00:14:24.602 real 0m14.216s 00:14:24.602 user 0m18.475s 00:14:24.602 sys 0m1.537s 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.602 
************************************ 00:14:24.602 END TEST raid_rebuild_test_io 00:14:24.602 ************************************ 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 03:25:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:24.602 03:25:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:24.602 03:25:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:24.602 03:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 ************************************ 00:14:24.602 START TEST raid_rebuild_test_sb_io 00:14:24.602 ************************************ 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.602 03:25:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76832 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76832 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 76832 ']' 
00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.602 03:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:24.602 Zero copy mechanism will not be used. 00:14:24.602 [2024-11-05 03:25:37.997122] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:24.602 [2024-11-05 03:25:37.997333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76832 ] 00:14:24.602 [2024-11-05 03:25:38.182773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.861 [2024-11-05 03:25:38.311960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.861 [2024-11-05 03:25:38.497588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.861 [2024-11-05 03:25:38.497671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:14:25.428 03:25:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 BaseBdev1_malloc 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 [2024-11-05 03:25:38.954063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:25.428 [2024-11-05 03:25:38.954162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.428 [2024-11-05 03:25:38.954193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:25.428 [2024-11-05 03:25:38.954210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.428 [2024-11-05 03:25:38.957190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.428 [2024-11-05 03:25:38.957250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.428 BaseBdev1 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 BaseBdev2_malloc 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.428 03:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 [2024-11-05 03:25:38.999820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:25.428 [2024-11-05 03:25:38.999896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.428 [2024-11-05 03:25:38.999920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:25.428 [2024-11-05 03:25:38.999937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.428 [2024-11-05 03:25:39.002861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.428 [2024-11-05 03:25:39.002914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:25.428 BaseBdev2 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:25.428 spare_malloc 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.428 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 spare_delay 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.687 [2024-11-05 03:25:39.072384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.687 [2024-11-05 03:25:39.072473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.687 [2024-11-05 03:25:39.072500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:25.687 [2024-11-05 03:25:39.072516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.687 [2024-11-05 03:25:39.075210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.687 [2024-11-05 03:25:39.075268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.687 spare 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.687 [2024-11-05 03:25:39.084457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.687 [2024-11-05 03:25:39.086879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.687 [2024-11-05 03:25:39.087095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:25.687 [2024-11-05 03:25:39.087118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.687 [2024-11-05 03:25:39.087499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:25.687 [2024-11-05 03:25:39.087740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:25.687 [2024-11-05 03:25:39.087766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:25.687 [2024-11-05 03:25:39.087940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.687 03:25:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.687 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.687 "name": "raid_bdev1", 00:14:25.687 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:25.687 "strip_size_kb": 0, 00:14:25.687 "state": "online", 00:14:25.687 "raid_level": "raid1", 00:14:25.687 "superblock": true, 00:14:25.687 "num_base_bdevs": 2, 00:14:25.687 "num_base_bdevs_discovered": 2, 00:14:25.687 "num_base_bdevs_operational": 2, 00:14:25.687 "base_bdevs_list": [ 00:14:25.687 { 00:14:25.688 "name": "BaseBdev1", 00:14:25.688 "uuid": "b690be71-e72e-5ad0-965d-3be5db4c67d9", 00:14:25.688 "is_configured": true, 00:14:25.688 "data_offset": 2048, 00:14:25.688 "data_size": 63488 00:14:25.688 }, 00:14:25.688 { 00:14:25.688 "name": "BaseBdev2", 00:14:25.688 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:25.688 "is_configured": true, 00:14:25.688 "data_offset": 2048, 
00:14:25.688 "data_size": 63488 00:14:25.688 } 00:14:25.688 ] 00:14:25.688 }' 00:14:25.688 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.688 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.255 [2024-11-05 03:25:39.604978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.255 03:25:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.255 [2024-11-05 03:25:39.708661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.255 "name": "raid_bdev1", 00:14:26.255 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:26.255 "strip_size_kb": 0, 00:14:26.255 "state": "online", 00:14:26.255 "raid_level": "raid1", 00:14:26.255 "superblock": true, 00:14:26.255 "num_base_bdevs": 2, 00:14:26.255 "num_base_bdevs_discovered": 1, 00:14:26.255 "num_base_bdevs_operational": 1, 00:14:26.255 "base_bdevs_list": [ 00:14:26.255 { 00:14:26.255 "name": null, 00:14:26.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.255 "is_configured": false, 00:14:26.255 "data_offset": 0, 00:14:26.255 "data_size": 63488 00:14:26.255 }, 00:14:26.255 { 00:14:26.255 "name": "BaseBdev2", 00:14:26.255 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:26.255 "is_configured": true, 00:14:26.255 "data_offset": 2048, 00:14:26.255 "data_size": 63488 00:14:26.255 } 00:14:26.255 ] 00:14:26.255 }' 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.255 03:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.255 [2024-11-05 03:25:39.836733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:26.255 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:26.255 Zero copy mechanism will not be used. 00:14:26.255 Running I/O for 60 seconds... 
00:14:26.822 03:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.822 03:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.822 03:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.822 [2024-11-05 03:25:40.244529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.822 03:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.822 03:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.822 [2024-11-05 03:25:40.311694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:26.822 [2024-11-05 03:25:40.314170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.822 [2024-11-05 03:25:40.435516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.822 [2024-11-05 03:25:40.436275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:27.081 [2024-11-05 03:25:40.657038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.081 [2024-11-05 03:25:40.657527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.340 159.00 IOPS, 477.00 MiB/s [2024-11-05T03:25:40.979Z] [2024-11-05 03:25:40.920360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.907 "name": "raid_bdev1", 00:14:27.907 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:27.907 "strip_size_kb": 0, 00:14:27.907 "state": "online", 00:14:27.907 "raid_level": "raid1", 00:14:27.907 "superblock": true, 00:14:27.907 "num_base_bdevs": 2, 00:14:27.907 "num_base_bdevs_discovered": 2, 00:14:27.907 "num_base_bdevs_operational": 2, 00:14:27.907 "process": { 00:14:27.907 "type": "rebuild", 00:14:27.907 "target": "spare", 00:14:27.907 "progress": { 00:14:27.907 "blocks": 12288, 00:14:27.907 "percent": 19 00:14:27.907 } 00:14:27.907 }, 00:14:27.907 "base_bdevs_list": [ 00:14:27.907 { 00:14:27.907 "name": "spare", 00:14:27.907 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:27.907 "is_configured": true, 00:14:27.907 "data_offset": 2048, 00:14:27.907 "data_size": 63488 00:14:27.907 }, 00:14:27.907 { 00:14:27.907 "name": "BaseBdev2", 00:14:27.907 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:27.907 "is_configured": true, 
00:14:27.907 "data_offset": 2048, 00:14:27.907 "data_size": 63488 00:14:27.907 } 00:14:27.907 ] 00:14:27.907 }' 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.907 [2024-11-05 03:25:41.410743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.907 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.907 [2024-11-05 03:25:41.459141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.166 [2024-11-05 03:25:41.628826] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.166 [2024-11-05 03:25:41.646638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.166 [2024-11-05 03:25:41.646697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.166 [2024-11-05 03:25:41.646716] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.166 [2024-11-05 03:25:41.674510] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.166 03:25:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.166 "name": "raid_bdev1", 00:14:28.166 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:28.166 "strip_size_kb": 0, 00:14:28.166 "state": "online", 00:14:28.166 "raid_level": "raid1", 00:14:28.166 
"superblock": true, 00:14:28.166 "num_base_bdevs": 2, 00:14:28.166 "num_base_bdevs_discovered": 1, 00:14:28.166 "num_base_bdevs_operational": 1, 00:14:28.166 "base_bdevs_list": [ 00:14:28.166 { 00:14:28.166 "name": null, 00:14:28.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.166 "is_configured": false, 00:14:28.166 "data_offset": 0, 00:14:28.166 "data_size": 63488 00:14:28.166 }, 00:14:28.166 { 00:14:28.166 "name": "BaseBdev2", 00:14:28.166 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:28.166 "is_configured": true, 00:14:28.166 "data_offset": 2048, 00:14:28.166 "data_size": 63488 00:14:28.166 } 00:14:28.166 ] 00:14:28.166 }' 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.166 03:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.684 144.00 IOPS, 432.00 MiB/s [2024-11-05T03:25:42.323Z] 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.684 "name": "raid_bdev1", 00:14:28.684 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:28.684 "strip_size_kb": 0, 00:14:28.684 "state": "online", 00:14:28.684 "raid_level": "raid1", 00:14:28.684 "superblock": true, 00:14:28.684 "num_base_bdevs": 2, 00:14:28.684 "num_base_bdevs_discovered": 1, 00:14:28.684 "num_base_bdevs_operational": 1, 00:14:28.684 "base_bdevs_list": [ 00:14:28.684 { 00:14:28.684 "name": null, 00:14:28.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.684 "is_configured": false, 00:14:28.684 "data_offset": 0, 00:14:28.684 "data_size": 63488 00:14:28.684 }, 00:14:28.684 { 00:14:28.684 "name": "BaseBdev2", 00:14:28.684 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:28.684 "is_configured": true, 00:14:28.684 "data_offset": 2048, 00:14:28.684 "data_size": 63488 00:14:28.684 } 00:14:28.684 ] 00:14:28.684 }' 00:14:28.684 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 [2024-11-05 03:25:42.405025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.942 03:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.942 [2024-11-05 03:25:42.456457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:28.942 [2024-11-05 03:25:42.459031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.942 [2024-11-05 03:25:42.567856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.942 [2024-11-05 03:25:42.568676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:29.201 [2024-11-05 03:25:42.793786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:29.201 [2024-11-05 03:25:42.794192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:29.718 158.67 IOPS, 476.00 MiB/s [2024-11-05T03:25:43.357Z] [2024-11-05 03:25:43.162649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:29.977 [2024-11-05 03:25:43.379382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.977 [2024-11-05 03:25:43.379814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.977 03:25:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.977 "name": "raid_bdev1", 00:14:29.977 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:29.977 "strip_size_kb": 0, 00:14:29.977 "state": "online", 00:14:29.977 "raid_level": "raid1", 00:14:29.977 "superblock": true, 00:14:29.977 "num_base_bdevs": 2, 00:14:29.977 "num_base_bdevs_discovered": 2, 00:14:29.977 "num_base_bdevs_operational": 2, 00:14:29.977 "process": { 00:14:29.977 "type": "rebuild", 00:14:29.977 "target": "spare", 00:14:29.977 "progress": { 00:14:29.977 "blocks": 10240, 00:14:29.977 "percent": 16 00:14:29.977 } 00:14:29.977 }, 00:14:29.977 "base_bdevs_list": [ 00:14:29.977 { 00:14:29.977 "name": "spare", 00:14:29.977 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:29.977 "is_configured": true, 00:14:29.977 "data_offset": 2048, 00:14:29.977 "data_size": 63488 00:14:29.977 }, 00:14:29.977 { 00:14:29.977 "name": "BaseBdev2", 00:14:29.977 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:29.977 "is_configured": true, 00:14:29.977 "data_offset": 2048, 00:14:29.977 "data_size": 63488 00:14:29.977 } 00:14:29.977 ] 00:14:29.977 }' 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:29.977 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:29.977 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.978 03:25:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.978 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.237 "name": "raid_bdev1", 00:14:30.237 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:30.237 "strip_size_kb": 0, 00:14:30.237 "state": "online", 00:14:30.237 "raid_level": "raid1", 00:14:30.237 "superblock": true, 00:14:30.237 "num_base_bdevs": 2, 00:14:30.237 "num_base_bdevs_discovered": 2, 00:14:30.237 "num_base_bdevs_operational": 2, 00:14:30.237 "process": { 00:14:30.237 "type": "rebuild", 00:14:30.237 "target": "spare", 00:14:30.237 "progress": { 00:14:30.237 "blocks": 10240, 00:14:30.237 "percent": 16 00:14:30.237 } 00:14:30.237 }, 00:14:30.237 "base_bdevs_list": [ 00:14:30.237 { 00:14:30.237 "name": "spare", 00:14:30.237 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:30.237 "is_configured": true, 00:14:30.237 "data_offset": 2048, 00:14:30.237 "data_size": 63488 00:14:30.237 }, 00:14:30.237 { 00:14:30.237 "name": "BaseBdev2", 00:14:30.237 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:30.237 "is_configured": true, 00:14:30.237 "data_offset": 2048, 00:14:30.237 "data_size": 63488 00:14:30.237 } 00:14:30.237 ] 00:14:30.237 }' 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.237 [2024-11-05 
03:25:43.745246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.237 03:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.803 134.75 IOPS, 404.25 MiB/s [2024-11-05T03:25:44.442Z] [2024-11-05 03:25:44.214098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:30.803 [2024-11-05 03:25:44.319426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:31.062 [2024-11-05 03:25:44.643606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:31.062 [2024-11-05 03:25:44.644356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.321 "name": "raid_bdev1", 00:14:31.321 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:31.321 "strip_size_kb": 0, 00:14:31.321 "state": "online", 00:14:31.321 "raid_level": "raid1", 00:14:31.321 "superblock": true, 00:14:31.321 "num_base_bdevs": 2, 00:14:31.321 "num_base_bdevs_discovered": 2, 00:14:31.321 "num_base_bdevs_operational": 2, 00:14:31.321 "process": { 00:14:31.321 "type": "rebuild", 00:14:31.321 "target": "spare", 00:14:31.321 "progress": { 00:14:31.321 "blocks": 26624, 00:14:31.321 "percent": 41 00:14:31.321 } 00:14:31.321 }, 00:14:31.321 "base_bdevs_list": [ 00:14:31.321 { 00:14:31.321 "name": "spare", 00:14:31.321 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:31.321 "is_configured": true, 00:14:31.321 "data_offset": 2048, 00:14:31.321 "data_size": 63488 00:14:31.321 }, 00:14:31.321 { 00:14:31.321 "name": "BaseBdev2", 00:14:31.321 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:31.321 "is_configured": true, 00:14:31.321 "data_offset": 2048, 00:14:31.321 "data_size": 63488 00:14:31.321 } 00:14:31.321 ] 00:14:31.321 }' 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.321 125.40 IOPS, 376.20 MiB/s [2024-11-05T03:25:44.960Z] [2024-11-05 03:25:44.862309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.321 03:25:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.321 03:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.580 [2024-11-05 03:25:45.089593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:31.838 [2024-11-05 03:25:45.218449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:32.097 [2024-11-05 03:25:45.575872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:32.356 112.67 IOPS, 338.00 MiB/s [2024-11-05T03:25:45.995Z] 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:32.356 [2024-11-05 03:25:45.956574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.356 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.356 "name": "raid_bdev1", 00:14:32.356 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:32.356 "strip_size_kb": 0, 00:14:32.356 "state": "online", 00:14:32.356 "raid_level": "raid1", 00:14:32.356 "superblock": true, 00:14:32.356 "num_base_bdevs": 2, 00:14:32.356 "num_base_bdevs_discovered": 2, 00:14:32.356 "num_base_bdevs_operational": 2, 00:14:32.356 "process": { 00:14:32.356 "type": "rebuild", 00:14:32.356 "target": "spare", 00:14:32.356 "progress": { 00:14:32.356 "blocks": 43008, 00:14:32.356 "percent": 67 00:14:32.356 } 00:14:32.356 }, 00:14:32.356 "base_bdevs_list": [ 00:14:32.356 { 00:14:32.356 "name": "spare", 00:14:32.356 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:32.356 "is_configured": true, 00:14:32.356 "data_offset": 2048, 00:14:32.356 "data_size": 63488 00:14:32.356 }, 00:14:32.356 { 00:14:32.356 "name": "BaseBdev2", 00:14:32.356 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:32.356 "is_configured": true, 00:14:32.356 "data_offset": 2048, 00:14:32.356 "data_size": 63488 00:14:32.356 } 00:14:32.356 ] 00:14:32.356 }' 00:14:32.614 03:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.614 03:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.615 03:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.615 03:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.615 03:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 
-- # sleep 1 00:14:33.551 103.43 IOPS, 310.29 MiB/s [2024-11-05T03:25:47.190Z] [2024-11-05 03:25:47.047931] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.551 [2024-11-05 03:25:47.154450] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.551 "name": "raid_bdev1", 00:14:33.551 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:33.551 "strip_size_kb": 0, 00:14:33.551 "state": "online", 00:14:33.551 "raid_level": "raid1", 00:14:33.551 "superblock": true, 00:14:33.551 "num_base_bdevs": 2, 00:14:33.551 "num_base_bdevs_discovered": 
2, 00:14:33.551 "num_base_bdevs_operational": 2, 00:14:33.551 "process": { 00:14:33.551 "type": "rebuild", 00:14:33.551 "target": "spare", 00:14:33.551 "progress": { 00:14:33.551 "blocks": 63488, 00:14:33.551 "percent": 100 00:14:33.551 } 00:14:33.551 }, 00:14:33.551 "base_bdevs_list": [ 00:14:33.551 { 00:14:33.551 "name": "spare", 00:14:33.551 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:33.551 "is_configured": true, 00:14:33.551 "data_offset": 2048, 00:14:33.551 "data_size": 63488 00:14:33.551 }, 00:14:33.551 { 00:14:33.551 "name": "BaseBdev2", 00:14:33.551 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:33.551 "is_configured": true, 00:14:33.551 "data_offset": 2048, 00:14:33.551 "data_size": 63488 00:14:33.551 } 00:14:33.551 ] 00:14:33.551 }' 00:14:33.551 [2024-11-05 03:25:47.157365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.551 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.811 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.811 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.811 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.811 03:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.638 94.75 IOPS, 284.25 MiB/s [2024-11-05T03:25:48.277Z] 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.638 03:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.638 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.896 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.896 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.896 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.896 "name": "raid_bdev1", 00:14:34.896 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:34.896 "strip_size_kb": 0, 00:14:34.896 "state": "online", 00:14:34.896 "raid_level": "raid1", 00:14:34.896 "superblock": true, 00:14:34.896 "num_base_bdevs": 2, 00:14:34.896 "num_base_bdevs_discovered": 2, 00:14:34.896 "num_base_bdevs_operational": 2, 00:14:34.896 "base_bdevs_list": [ 00:14:34.896 { 00:14:34.896 "name": "spare", 00:14:34.896 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:34.896 "is_configured": true, 00:14:34.896 "data_offset": 2048, 00:14:34.896 "data_size": 63488 00:14:34.896 }, 00:14:34.896 { 00:14:34.897 "name": "BaseBdev2", 00:14:34.897 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:34.897 "is_configured": true, 00:14:34.897 "data_offset": 2048, 00:14:34.897 "data_size": 63488 00:14:34.897 } 00:14:34.897 ] 00:14:34.897 }' 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:34.897 03:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.897 "name": "raid_bdev1", 00:14:34.897 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:34.897 "strip_size_kb": 0, 00:14:34.897 "state": "online", 00:14:34.897 "raid_level": "raid1", 00:14:34.897 "superblock": true, 00:14:34.897 "num_base_bdevs": 2, 00:14:34.897 "num_base_bdevs_discovered": 2, 00:14:34.897 "num_base_bdevs_operational": 2, 00:14:34.897 "base_bdevs_list": [ 00:14:34.897 { 00:14:34.897 "name": "spare", 00:14:34.897 "uuid": 
"a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:34.897 "is_configured": true, 00:14:34.897 "data_offset": 2048, 00:14:34.897 "data_size": 63488 00:14:34.897 }, 00:14:34.897 { 00:14:34.897 "name": "BaseBdev2", 00:14:34.897 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:34.897 "is_configured": true, 00:14:34.897 "data_offset": 2048, 00:14:34.897 "data_size": 63488 00:14:34.897 } 00:14:34.897 ] 00:14:34.897 }' 00:14:34.897 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.156 "name": "raid_bdev1", 00:14:35.156 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:35.156 "strip_size_kb": 0, 00:14:35.156 "state": "online", 00:14:35.156 "raid_level": "raid1", 00:14:35.156 "superblock": true, 00:14:35.156 "num_base_bdevs": 2, 00:14:35.156 "num_base_bdevs_discovered": 2, 00:14:35.156 "num_base_bdevs_operational": 2, 00:14:35.156 "base_bdevs_list": [ 00:14:35.156 { 00:14:35.156 "name": "spare", 00:14:35.156 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:35.156 "is_configured": true, 00:14:35.156 "data_offset": 2048, 00:14:35.156 "data_size": 63488 00:14:35.156 }, 00:14:35.156 { 00:14:35.156 "name": "BaseBdev2", 00:14:35.156 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:35.156 "is_configured": true, 00:14:35.156 "data_offset": 2048, 00:14:35.156 "data_size": 63488 00:14:35.156 } 00:14:35.156 ] 00:14:35.156 }' 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.156 03:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.674 87.78 IOPS, 263.33 MiB/s [2024-11-05T03:25:49.313Z] 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.674 [2024-11-05 03:25:49.131385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.674 [2024-11-05 03:25:49.131440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.674 00:14:35.674 Latency(us) 00:14:35.674 [2024-11-05T03:25:49.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.674 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:35.674 raid_bdev1 : 9.38 84.97 254.90 0.00 0.00 16808.84 253.21 118203.11 00:14:35.674 [2024-11-05T03:25:49.313Z] =================================================================================================================== 00:14:35.674 [2024-11-05T03:25:49.313Z] Total : 84.97 254.90 0.00 0.00 16808.84 253.21 118203.11 00:14:35.674 [2024-11-05 03:25:49.237135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.674 [2024-11-05 03:25:49.237198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.674 [2024-11-05 03:25:49.237293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.674 [2024-11-05 03:25:49.237342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.674 { 00:14:35.674 "results": [ 00:14:35.674 { 00:14:35.674 "job": "raid_bdev1", 00:14:35.674 "core_mask": "0x1", 00:14:35.674 "workload": "randrw", 00:14:35.674 "percentage": 50, 00:14:35.674 "status": "finished", 00:14:35.674 "queue_depth": 2, 00:14:35.674 "io_size": 3145728, 00:14:35.674 "runtime": 9.380038, 00:14:35.674 "iops": 84.96767283885204, 00:14:35.674 "mibps": 254.90301851655613, 00:14:35.674 "io_failed": 0, 00:14:35.674 "io_timeout": 0, 00:14:35.674 
"avg_latency_us": 16808.840278316413, 00:14:35.674 "min_latency_us": 253.20727272727274, 00:14:35.674 "max_latency_us": 118203.11272727273 00:14:35.674 } 00:14:35.674 ], 00:14:35.674 "core_count": 1 00:14:35.674 } 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.674 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:36.241 /dev/nbd0 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.241 1+0 records in 00:14:36.241 1+0 records out 00:14:36.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348553 s, 11.8 MB/s 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 
00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.241 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:36.500 /dev/nbd1 00:14:36.500 03:25:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.500 1+0 records in 00:14:36.500 1+0 records out 00:14:36.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409422 s, 10.0 MB/s 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 
00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.500 03:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.500 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:36.759 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.018 
03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.018 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.324 [2024-11-05 03:25:50.722908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.324 [2024-11-05 03:25:50.723018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.324 [2024-11-05 03:25:50.723050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:37.324 [2024-11-05 03:25:50.723066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.324 [2024-11-05 03:25:50.726518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.324 [2024-11-05 03:25:50.726586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.324 [2024-11-05 03:25:50.726744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:37.324 [2024-11-05 03:25:50.726884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.324 [2024-11-05 03:25:50.727081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.324 spare 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.324 [2024-11-05 03:25:50.827309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:37.324 [2024-11-05 03:25:50.827350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.324 [2024-11-05 03:25:50.827862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:37.324 [2024-11-05 03:25:50.828111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:37.324 [2024-11-05 03:25:50.828147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:37.324 [2024-11-05 03:25:50.828392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.324 "name": "raid_bdev1", 00:14:37.324 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:37.324 "strip_size_kb": 0, 00:14:37.324 "state": "online", 00:14:37.324 "raid_level": "raid1", 00:14:37.324 "superblock": true, 00:14:37.324 "num_base_bdevs": 2, 00:14:37.324 "num_base_bdevs_discovered": 2, 00:14:37.324 "num_base_bdevs_operational": 2, 00:14:37.324 "base_bdevs_list": [ 00:14:37.324 { 00:14:37.324 "name": "spare", 00:14:37.324 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:37.324 "is_configured": true, 00:14:37.324 "data_offset": 2048, 00:14:37.324 "data_size": 63488 00:14:37.324 }, 00:14:37.324 { 00:14:37.324 "name": "BaseBdev2", 00:14:37.324 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:37.324 "is_configured": true, 00:14:37.324 "data_offset": 2048, 00:14:37.324 "data_size": 63488 00:14:37.324 } 00:14:37.324 ] 00:14:37.324 }' 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.324 03:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.892 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.892 "name": "raid_bdev1", 00:14:37.892 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:37.892 "strip_size_kb": 0, 00:14:37.892 "state": "online", 00:14:37.892 "raid_level": "raid1", 00:14:37.892 "superblock": true, 00:14:37.892 "num_base_bdevs": 2, 00:14:37.892 "num_base_bdevs_discovered": 2, 00:14:37.892 "num_base_bdevs_operational": 2, 00:14:37.892 "base_bdevs_list": [ 00:14:37.892 { 00:14:37.892 "name": "spare", 00:14:37.892 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:37.892 "is_configured": true, 00:14:37.893 "data_offset": 2048, 00:14:37.893 "data_size": 63488 00:14:37.893 }, 00:14:37.893 { 00:14:37.893 "name": "BaseBdev2", 00:14:37.893 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:37.893 "is_configured": true, 00:14:37.893 "data_offset": 2048, 00:14:37.893 "data_size": 63488 00:14:37.893 } 00:14:37.893 ] 00:14:37.893 }' 00:14:37.893 
03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.893 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.152 [2024-11-05 03:25:51.535593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.152 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.152 "name": "raid_bdev1", 00:14:38.152 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:38.152 "strip_size_kb": 0, 00:14:38.152 "state": "online", 00:14:38.152 "raid_level": "raid1", 00:14:38.152 "superblock": true, 00:14:38.152 "num_base_bdevs": 2, 00:14:38.152 "num_base_bdevs_discovered": 1, 00:14:38.152 "num_base_bdevs_operational": 1, 00:14:38.152 "base_bdevs_list": [ 00:14:38.152 { 00:14:38.152 "name": null, 00:14:38.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.152 "is_configured": false, 
00:14:38.152 "data_offset": 0, 00:14:38.152 "data_size": 63488 00:14:38.152 }, 00:14:38.152 { 00:14:38.152 "name": "BaseBdev2", 00:14:38.153 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:38.153 "is_configured": true, 00:14:38.153 "data_offset": 2048, 00:14:38.153 "data_size": 63488 00:14:38.153 } 00:14:38.153 ] 00:14:38.153 }' 00:14:38.153 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.153 03:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.412 03:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.412 03:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.412 03:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.412 [2024-11-05 03:25:52.043869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.412 [2024-11-05 03:25:52.044111] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:38.412 [2024-11-05 03:25:52.044150] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:38.412 [2024-11-05 03:25:52.044238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.670 [2024-11-05 03:25:52.060964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:38.670 03:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.670 03:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:38.671 [2024-11-05 03:25:52.063825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.606 "name": "raid_bdev1", 00:14:39.606 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:39.606 "strip_size_kb": 0, 00:14:39.606 "state": "online", 
00:14:39.606 "raid_level": "raid1", 00:14:39.606 "superblock": true, 00:14:39.606 "num_base_bdevs": 2, 00:14:39.606 "num_base_bdevs_discovered": 2, 00:14:39.606 "num_base_bdevs_operational": 2, 00:14:39.606 "process": { 00:14:39.606 "type": "rebuild", 00:14:39.606 "target": "spare", 00:14:39.606 "progress": { 00:14:39.606 "blocks": 20480, 00:14:39.606 "percent": 32 00:14:39.606 } 00:14:39.606 }, 00:14:39.606 "base_bdevs_list": [ 00:14:39.606 { 00:14:39.606 "name": "spare", 00:14:39.606 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:39.606 "is_configured": true, 00:14:39.606 "data_offset": 2048, 00:14:39.606 "data_size": 63488 00:14:39.606 }, 00:14:39.606 { 00:14:39.606 "name": "BaseBdev2", 00:14:39.606 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:39.606 "is_configured": true, 00:14:39.606 "data_offset": 2048, 00:14:39.606 "data_size": 63488 00:14:39.606 } 00:14:39.606 ] 00:14:39.606 }' 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.606 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.606 [2024-11-05 03:25:53.229076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.865 [2024-11-05 03:25:53.272845] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.865 [2024-11-05 
03:25:53.273008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.865 [2024-11-05 03:25:53.273035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.865 [2024-11-05 03:25:53.273046] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.865 "name": "raid_bdev1", 00:14:39.865 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:39.865 "strip_size_kb": 0, 00:14:39.865 "state": "online", 00:14:39.865 "raid_level": "raid1", 00:14:39.865 "superblock": true, 00:14:39.865 "num_base_bdevs": 2, 00:14:39.865 "num_base_bdevs_discovered": 1, 00:14:39.865 "num_base_bdevs_operational": 1, 00:14:39.865 "base_bdevs_list": [ 00:14:39.865 { 00:14:39.865 "name": null, 00:14:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.865 "is_configured": false, 00:14:39.865 "data_offset": 0, 00:14:39.865 "data_size": 63488 00:14:39.865 }, 00:14:39.865 { 00:14:39.865 "name": "BaseBdev2", 00:14:39.865 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:39.865 "is_configured": true, 00:14:39.865 "data_offset": 2048, 00:14:39.865 "data_size": 63488 00:14:39.865 } 00:14:39.865 ] 00:14:39.865 }' 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.865 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.433 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:40.433 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.433 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.433 [2024-11-05 03:25:53.820548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.433 [2024-11-05 03:25:53.820683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.433 [2024-11-05 03:25:53.820751] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:40.433 [2024-11-05 03:25:53.820765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.433 [2024-11-05 03:25:53.821460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.433 [2024-11-05 03:25:53.821501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.433 [2024-11-05 03:25:53.821628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:40.433 [2024-11-05 03:25:53.821647] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:40.433 [2024-11-05 03:25:53.821683] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:40.433 [2024-11-05 03:25:53.821718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.433 [2024-11-05 03:25:53.838632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:40.433 spare 00:14:40.433 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.433 03:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:40.433 [2024-11-05 03:25:53.841602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.370 "name": "raid_bdev1", 00:14:41.370 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:41.370 "strip_size_kb": 0, 00:14:41.370 "state": "online", 00:14:41.370 "raid_level": "raid1", 00:14:41.370 "superblock": true, 00:14:41.370 "num_base_bdevs": 2, 00:14:41.370 "num_base_bdevs_discovered": 2, 00:14:41.370 "num_base_bdevs_operational": 2, 00:14:41.370 "process": { 00:14:41.370 "type": "rebuild", 00:14:41.370 "target": "spare", 00:14:41.370 "progress": { 00:14:41.370 "blocks": 20480, 00:14:41.370 "percent": 32 00:14:41.370 } 00:14:41.370 }, 00:14:41.370 "base_bdevs_list": [ 00:14:41.370 { 00:14:41.370 "name": "spare", 00:14:41.370 "uuid": "a73194c4-3b94-591c-9546-af0cea4ef1e5", 00:14:41.370 "is_configured": true, 00:14:41.370 "data_offset": 2048, 00:14:41.370 "data_size": 63488 00:14:41.370 }, 00:14:41.370 { 00:14:41.370 "name": "BaseBdev2", 00:14:41.370 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:41.370 "is_configured": true, 00:14:41.370 "data_offset": 2048, 00:14:41.370 "data_size": 63488 00:14:41.370 } 00:14:41.370 ] 00:14:41.370 }' 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:41.370 03:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.629 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.629 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.629 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.629 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.629 [2024-11-05 03:25:55.019102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.629 [2024-11-05 03:25:55.051026] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.629 [2024-11-05 03:25:55.051159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.629 [2024-11-05 03:25:55.051185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.630 [2024-11-05 03:25:55.051199] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.630 "name": "raid_bdev1", 00:14:41.630 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:41.630 "strip_size_kb": 0, 00:14:41.630 "state": "online", 00:14:41.630 "raid_level": "raid1", 00:14:41.630 "superblock": true, 00:14:41.630 "num_base_bdevs": 2, 00:14:41.630 "num_base_bdevs_discovered": 1, 00:14:41.630 "num_base_bdevs_operational": 1, 00:14:41.630 "base_bdevs_list": [ 00:14:41.630 { 00:14:41.630 "name": null, 00:14:41.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.630 "is_configured": false, 00:14:41.630 "data_offset": 0, 00:14:41.630 "data_size": 63488 00:14:41.630 }, 00:14:41.630 { 00:14:41.630 "name": "BaseBdev2", 00:14:41.630 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:41.630 "is_configured": true, 00:14:41.630 "data_offset": 2048, 00:14:41.630 "data_size": 63488 00:14:41.630 } 00:14:41.630 ] 00:14:41.630 }' 
00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.630 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.244 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.244 "name": "raid_bdev1", 00:14:42.244 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:42.244 "strip_size_kb": 0, 00:14:42.244 "state": "online", 00:14:42.244 "raid_level": "raid1", 00:14:42.244 "superblock": true, 00:14:42.244 "num_base_bdevs": 2, 00:14:42.244 "num_base_bdevs_discovered": 1, 00:14:42.244 "num_base_bdevs_operational": 1, 00:14:42.244 "base_bdevs_list": [ 00:14:42.244 { 00:14:42.244 "name": null, 00:14:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.244 "is_configured": false, 00:14:42.244 "data_offset": 0, 
00:14:42.244 "data_size": 63488 00:14:42.244 }, 00:14:42.244 { 00:14:42.244 "name": "BaseBdev2", 00:14:42.245 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:42.245 "is_configured": true, 00:14:42.245 "data_offset": 2048, 00:14:42.245 "data_size": 63488 00:14:42.245 } 00:14:42.245 ] 00:14:42.245 }' 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.245 [2024-11-05 03:25:55.786453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.245 [2024-11-05 03:25:55.786543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.245 [2024-11-05 03:25:55.786570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:42.245 [2024-11-05 03:25:55.786586] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.245 [2024-11-05 03:25:55.787234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.245 [2024-11-05 03:25:55.787295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.245 [2024-11-05 03:25:55.787421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:42.245 [2024-11-05 03:25:55.787446] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:42.245 [2024-11-05 03:25:55.787456] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:42.245 [2024-11-05 03:25:55.787473] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:42.245 BaseBdev1 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.245 03:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.183 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.442 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.442 "name": "raid_bdev1", 00:14:43.442 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:43.442 "strip_size_kb": 0, 00:14:43.442 "state": "online", 00:14:43.442 "raid_level": "raid1", 00:14:43.442 "superblock": true, 00:14:43.442 "num_base_bdevs": 2, 00:14:43.442 "num_base_bdevs_discovered": 1, 00:14:43.442 "num_base_bdevs_operational": 1, 00:14:43.442 "base_bdevs_list": [ 00:14:43.442 { 00:14:43.442 "name": null, 00:14:43.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.442 "is_configured": false, 00:14:43.442 "data_offset": 0, 00:14:43.442 "data_size": 63488 00:14:43.442 }, 00:14:43.442 { 00:14:43.442 "name": "BaseBdev2", 00:14:43.442 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:43.442 "is_configured": true, 00:14:43.442 "data_offset": 2048, 00:14:43.442 "data_size": 63488 00:14:43.442 } 00:14:43.442 ] 00:14:43.442 }' 00:14:43.442 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.442 03:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.701 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.961 "name": "raid_bdev1", 00:14:43.961 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:43.961 "strip_size_kb": 0, 00:14:43.961 "state": "online", 00:14:43.961 "raid_level": "raid1", 00:14:43.961 "superblock": true, 00:14:43.961 "num_base_bdevs": 2, 00:14:43.961 "num_base_bdevs_discovered": 1, 00:14:43.961 "num_base_bdevs_operational": 1, 00:14:43.961 "base_bdevs_list": [ 00:14:43.961 { 00:14:43.961 "name": null, 00:14:43.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.961 "is_configured": false, 00:14:43.961 "data_offset": 0, 00:14:43.961 "data_size": 63488 00:14:43.961 }, 00:14:43.961 { 00:14:43.961 "name": "BaseBdev2", 00:14:43.961 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:43.961 "is_configured": true, 
00:14:43.961 "data_offset": 2048, 00:14:43.961 "data_size": 63488 00:14:43.961 } 00:14:43.961 ] 00:14:43.961 }' 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.961 [2024-11-05 03:25:57.507172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.961 [2024-11-05 03:25:57.507404] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:43.961 [2024-11-05 03:25:57.507423] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:43.961 request: 00:14:43.961 { 00:14:43.961 "base_bdev": "BaseBdev1", 00:14:43.961 "raid_bdev": "raid_bdev1", 00:14:43.961 "method": "bdev_raid_add_base_bdev", 00:14:43.961 "req_id": 1 00:14:43.961 } 00:14:43.961 Got JSON-RPC error response 00:14:43.961 response: 00:14:43.961 { 00:14:43.961 "code": -22, 00:14:43.961 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:43.961 } 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.961 03:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.898 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.157 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.157 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.157 "name": "raid_bdev1", 00:14:45.157 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:45.157 "strip_size_kb": 0, 00:14:45.157 "state": "online", 00:14:45.157 "raid_level": "raid1", 00:14:45.157 "superblock": true, 00:14:45.157 "num_base_bdevs": 2, 00:14:45.157 "num_base_bdevs_discovered": 1, 00:14:45.157 "num_base_bdevs_operational": 1, 00:14:45.157 "base_bdevs_list": [ 00:14:45.157 { 00:14:45.157 "name": null, 00:14:45.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.157 "is_configured": false, 00:14:45.157 "data_offset": 0, 00:14:45.157 "data_size": 63488 00:14:45.157 }, 00:14:45.157 { 00:14:45.157 "name": "BaseBdev2", 00:14:45.157 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:45.157 "is_configured": true, 00:14:45.157 "data_offset": 2048, 00:14:45.157 "data_size": 63488 00:14:45.157 } 00:14:45.157 ] 00:14:45.157 }' 
00:14:45.157 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.157 03:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.416 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.675 "name": "raid_bdev1", 00:14:45.675 "uuid": "15d5ac2c-6742-488e-8bbd-9b752d57d9be", 00:14:45.675 "strip_size_kb": 0, 00:14:45.675 "state": "online", 00:14:45.675 "raid_level": "raid1", 00:14:45.675 "superblock": true, 00:14:45.675 "num_base_bdevs": 2, 00:14:45.675 "num_base_bdevs_discovered": 1, 00:14:45.675 "num_base_bdevs_operational": 1, 00:14:45.675 "base_bdevs_list": [ 00:14:45.675 { 00:14:45.675 "name": null, 00:14:45.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.675 "is_configured": false, 00:14:45.675 "data_offset": 0, 
00:14:45.675 "data_size": 63488 00:14:45.675 }, 00:14:45.675 { 00:14:45.675 "name": "BaseBdev2", 00:14:45.675 "uuid": "08af19fa-6149-584a-838b-c45e7e35235b", 00:14:45.675 "is_configured": true, 00:14:45.675 "data_offset": 2048, 00:14:45.675 "data_size": 63488 00:14:45.675 } 00:14:45.675 ] 00:14:45.675 }' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76832 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 76832 ']' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 76832 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76832 00:14:45.675 killing process with pid 76832 00:14:45.675 Received shutdown signal, test time was about 19.401800 seconds 00:14:45.675 00:14:45.675 Latency(us) 00:14:45.675 [2024-11-05T03:25:59.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.675 [2024-11-05T03:25:59.314Z] =================================================================================================================== 00:14:45.675 [2024-11-05T03:25:59.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76832' 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 76832 00:14:45.675 [2024-11-05 03:25:59.241045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.675 03:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 76832 00:14:45.675 [2024-11-05 03:25:59.241189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.675 [2024-11-05 03:25:59.241258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.675 [2024-11-05 03:25:59.241272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:45.935 [2024-11-05 03:25:59.416271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:46.877 00:14:46.877 real 0m22.505s 00:14:46.877 user 0m30.404s 00:14:46.877 sys 0m2.033s 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.877 ************************************ 00:14:46.877 END TEST raid_rebuild_test_sb_io 00:14:46.877 ************************************ 00:14:46.877 03:26:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:46.877 03:26:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:46.877 03:26:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:14:46.877 03:26:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.877 03:26:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.877 ************************************ 00:14:46.877 START TEST raid_rebuild_test 00:14:46.877 ************************************ 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:46.877 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77551 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77551 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77551 ']' 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.878 03:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.136 [2024-11-05 03:26:00.553163] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:47.136 [2024-11-05 03:26:00.553723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77551 ] 00:14:47.137 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.137 Zero copy mechanism will not be used. 
00:14:47.137 [2024-11-05 03:26:00.735477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.395 [2024-11-05 03:26:00.854953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.653 [2024-11-05 03:26:01.038293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.653 [2024-11-05 03:26:01.038362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.912 BaseBdev1_malloc 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.912 [2024-11-05 03:26:01.515815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:47.912 [2024-11-05 03:26:01.515928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.912 [2024-11-05 03:26:01.515969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:47.912 [2024-11-05 03:26:01.515986] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.912 [2024-11-05 03:26:01.519070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.912 [2024-11-05 03:26:01.519344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.912 BaseBdev1 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.912 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 BaseBdev2_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 [2024-11-05 03:26:01.569239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:48.172 [2024-11-05 03:26:01.569388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.172 [2024-11-05 03:26:01.569417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:48.172 [2024-11-05 03:26:01.569452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.172 [2024-11-05 03:26:01.572446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.172 [2024-11-05 03:26:01.572666] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.172 BaseBdev2 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 BaseBdev3_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 [2024-11-05 03:26:01.632754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:48.172 [2024-11-05 03:26:01.632832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.172 [2024-11-05 03:26:01.632860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:48.172 [2024-11-05 03:26:01.632878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.172 [2024-11-05 03:26:01.635797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.172 [2024-11-05 03:26:01.636017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:48.172 BaseBdev3 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 
03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 BaseBdev4_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 [2024-11-05 03:26:01.687630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:48.172 [2024-11-05 03:26:01.687746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.172 [2024-11-05 03:26:01.687772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:48.172 [2024-11-05 03:26:01.687789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.172 [2024-11-05 03:26:01.690704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.172 [2024-11-05 03:26:01.690786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:48.172 BaseBdev4 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 spare_malloc 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 spare_delay 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.172 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.172 [2024-11-05 03:26:01.745403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.172 [2024-11-05 03:26:01.745489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.173 [2024-11-05 03:26:01.745517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:48.173 [2024-11-05 03:26:01.745534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.173 [2024-11-05 03:26:01.748480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.173 [2024-11-05 03:26:01.748542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.173 spare 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.173 [2024-11-05 03:26:01.757483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.173 [2024-11-05 03:26:01.760069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.173 [2024-11-05 03:26:01.760165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.173 [2024-11-05 03:26:01.760234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:48.173 [2024-11-05 03:26:01.760345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:48.173 [2024-11-05 03:26:01.760381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:48.173 [2024-11-05 03:26:01.760678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:48.173 [2024-11-05 03:26:01.760877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:48.173 [2024-11-05 03:26:01.760894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:48.173 [2024-11-05 03:26:01.761053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.173 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.432 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.432 "name": "raid_bdev1", 00:14:48.432 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:14:48.432 "strip_size_kb": 0, 00:14:48.432 "state": "online", 00:14:48.432 "raid_level": "raid1", 00:14:48.432 "superblock": false, 00:14:48.432 "num_base_bdevs": 4, 00:14:48.432 "num_base_bdevs_discovered": 4, 00:14:48.432 "num_base_bdevs_operational": 4, 00:14:48.432 "base_bdevs_list": [ 00:14:48.432 { 00:14:48.432 "name": "BaseBdev1", 00:14:48.432 "uuid": "130d1da8-7b1c-520b-b19f-b54593def077", 00:14:48.432 "is_configured": true, 00:14:48.432 "data_offset": 0, 00:14:48.432 "data_size": 65536 00:14:48.432 }, 00:14:48.432 { 00:14:48.432 
"name": "BaseBdev2", 00:14:48.432 "uuid": "0de870d6-d5e0-59de-b756-2dd5192f03e2", 00:14:48.432 "is_configured": true, 00:14:48.432 "data_offset": 0, 00:14:48.432 "data_size": 65536 00:14:48.432 }, 00:14:48.432 { 00:14:48.432 "name": "BaseBdev3", 00:14:48.432 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:14:48.432 "is_configured": true, 00:14:48.432 "data_offset": 0, 00:14:48.432 "data_size": 65536 00:14:48.432 }, 00:14:48.432 { 00:14:48.432 "name": "BaseBdev4", 00:14:48.432 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:14:48.432 "is_configured": true, 00:14:48.432 "data_offset": 0, 00:14:48.432 "data_size": 65536 00:14:48.432 } 00:14:48.432 ] 00:14:48.432 }' 00:14:48.432 03:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.432 03:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.690 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.690 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:48.690 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.690 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.690 [2024-11-05 03:26:02.282118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.690 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.949 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:49.208 [2024-11-05 03:26:02.677842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:49.208 /dev/nbd0 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.208 
03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.208 1+0 records in 00:14:49.208 1+0 records out 00:14:49.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494749 s, 8.3 MB/s 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:49.208 03:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:57.330 65536+0 records in 00:14:57.330 65536+0 records out 00:14:57.330 33554432 bytes (34 MB, 32 MiB) copied, 7.68477 s, 4.4 MB/s 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.330 [2024-11-05 03:26:10.711971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:57.330 03:26:10 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.330 [2024-11-05 03:26:10.728873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.330 "name": "raid_bdev1", 00:14:57.330 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:14:57.330 "strip_size_kb": 0, 00:14:57.330 "state": "online", 00:14:57.330 "raid_level": "raid1", 00:14:57.330 "superblock": false, 00:14:57.330 "num_base_bdevs": 4, 00:14:57.330 "num_base_bdevs_discovered": 3, 00:14:57.330 "num_base_bdevs_operational": 3, 00:14:57.330 "base_bdevs_list": [ 00:14:57.330 { 00:14:57.330 "name": null, 00:14:57.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.330 "is_configured": false, 00:14:57.330 "data_offset": 0, 00:14:57.330 "data_size": 65536 00:14:57.330 }, 00:14:57.330 { 00:14:57.330 "name": "BaseBdev2", 00:14:57.330 "uuid": "0de870d6-d5e0-59de-b756-2dd5192f03e2", 00:14:57.330 "is_configured": true, 00:14:57.330 "data_offset": 0, 00:14:57.330 "data_size": 65536 00:14:57.330 }, 00:14:57.330 { 00:14:57.330 "name": "BaseBdev3", 00:14:57.330 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:14:57.330 "is_configured": true, 00:14:57.330 "data_offset": 0, 00:14:57.330 "data_size": 65536 00:14:57.330 }, 00:14:57.330 { 00:14:57.330 "name": "BaseBdev4", 00:14:57.330 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:14:57.330 "is_configured": true, 00:14:57.330 "data_offset": 0, 00:14:57.330 "data_size": 65536 00:14:57.330 } 00:14:57.330 ] 00:14:57.330 }' 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.330 03:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.589 03:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.589 03:26:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.589 03:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.589 [2024-11-05 03:26:11.200966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.589 [2024-11-05 03:26:11.214268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:57.589 03:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.589 03:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:57.589 [2024-11-05 03:26:11.217021] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.967 "name": "raid_bdev1", 00:14:58.967 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 
00:14:58.967 "strip_size_kb": 0, 00:14:58.967 "state": "online", 00:14:58.967 "raid_level": "raid1", 00:14:58.967 "superblock": false, 00:14:58.967 "num_base_bdevs": 4, 00:14:58.967 "num_base_bdevs_discovered": 4, 00:14:58.967 "num_base_bdevs_operational": 4, 00:14:58.967 "process": { 00:14:58.967 "type": "rebuild", 00:14:58.967 "target": "spare", 00:14:58.967 "progress": { 00:14:58.967 "blocks": 20480, 00:14:58.967 "percent": 31 00:14:58.967 } 00:14:58.967 }, 00:14:58.967 "base_bdevs_list": [ 00:14:58.967 { 00:14:58.967 "name": "spare", 00:14:58.967 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:14:58.967 "is_configured": true, 00:14:58.967 "data_offset": 0, 00:14:58.967 "data_size": 65536 00:14:58.967 }, 00:14:58.967 { 00:14:58.967 "name": "BaseBdev2", 00:14:58.967 "uuid": "0de870d6-d5e0-59de-b756-2dd5192f03e2", 00:14:58.967 "is_configured": true, 00:14:58.967 "data_offset": 0, 00:14:58.967 "data_size": 65536 00:14:58.967 }, 00:14:58.967 { 00:14:58.967 "name": "BaseBdev3", 00:14:58.967 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:14:58.967 "is_configured": true, 00:14:58.967 "data_offset": 0, 00:14:58.967 "data_size": 65536 00:14:58.967 }, 00:14:58.967 { 00:14:58.967 "name": "BaseBdev4", 00:14:58.967 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:14:58.967 "is_configured": true, 00:14:58.967 "data_offset": 0, 00:14:58.967 "data_size": 65536 00:14:58.967 } 00:14:58.967 ] 00:14:58.967 }' 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.967 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.967 [2024-11-05 03:26:12.382455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.967 [2024-11-05 03:26:12.426211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.967 [2024-11-05 03:26:12.426375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.967 [2024-11-05 03:26:12.426402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.967 [2024-11-05 03:26:12.426417] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.968 "name": "raid_bdev1", 00:14:58.968 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:14:58.968 "strip_size_kb": 0, 00:14:58.968 "state": "online", 00:14:58.968 "raid_level": "raid1", 00:14:58.968 "superblock": false, 00:14:58.968 "num_base_bdevs": 4, 00:14:58.968 "num_base_bdevs_discovered": 3, 00:14:58.968 "num_base_bdevs_operational": 3, 00:14:58.968 "base_bdevs_list": [ 00:14:58.968 { 00:14:58.968 "name": null, 00:14:58.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.968 "is_configured": false, 00:14:58.968 "data_offset": 0, 00:14:58.968 "data_size": 65536 00:14:58.968 }, 00:14:58.968 { 00:14:58.968 "name": "BaseBdev2", 00:14:58.968 "uuid": "0de870d6-d5e0-59de-b756-2dd5192f03e2", 00:14:58.968 "is_configured": true, 00:14:58.968 "data_offset": 0, 00:14:58.968 "data_size": 65536 00:14:58.968 }, 00:14:58.968 { 00:14:58.968 "name": "BaseBdev3", 00:14:58.968 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:14:58.968 "is_configured": true, 00:14:58.968 "data_offset": 0, 00:14:58.968 "data_size": 65536 00:14:58.968 }, 00:14:58.968 { 00:14:58.968 "name": "BaseBdev4", 00:14:58.968 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:14:58.968 "is_configured": true, 00:14:58.968 "data_offset": 0, 00:14:58.968 "data_size": 65536 00:14:58.968 } 00:14:58.968 ] 00:14:58.968 }' 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.968 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.536 03:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.536 "name": "raid_bdev1", 00:14:59.536 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:14:59.536 "strip_size_kb": 0, 00:14:59.536 "state": "online", 00:14:59.536 "raid_level": "raid1", 00:14:59.536 "superblock": false, 00:14:59.536 "num_base_bdevs": 4, 00:14:59.536 "num_base_bdevs_discovered": 3, 00:14:59.536 "num_base_bdevs_operational": 3, 00:14:59.536 "base_bdevs_list": [ 00:14:59.536 { 00:14:59.536 "name": null, 00:14:59.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.536 "is_configured": false, 00:14:59.536 "data_offset": 0, 00:14:59.536 "data_size": 65536 00:14:59.536 }, 00:14:59.536 { 00:14:59.536 "name": "BaseBdev2", 00:14:59.536 "uuid": 
"0de870d6-d5e0-59de-b756-2dd5192f03e2", 00:14:59.536 "is_configured": true, 00:14:59.536 "data_offset": 0, 00:14:59.536 "data_size": 65536 00:14:59.536 }, 00:14:59.536 { 00:14:59.536 "name": "BaseBdev3", 00:14:59.536 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:14:59.536 "is_configured": true, 00:14:59.536 "data_offset": 0, 00:14:59.536 "data_size": 65536 00:14:59.536 }, 00:14:59.536 { 00:14:59.536 "name": "BaseBdev4", 00:14:59.536 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:14:59.536 "is_configured": true, 00:14:59.536 "data_offset": 0, 00:14:59.536 "data_size": 65536 00:14:59.536 } 00:14:59.536 ] 00:14:59.536 }' 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.536 [2024-11-05 03:26:13.129781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.536 [2024-11-05 03:26:13.142813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.536 03:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:59.536 [2024-11-05 03:26:13.145303] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.950 03:26:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.950 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.950 "name": "raid_bdev1", 00:15:00.950 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:00.950 "strip_size_kb": 0, 00:15:00.950 "state": "online", 00:15:00.950 "raid_level": "raid1", 00:15:00.950 "superblock": false, 00:15:00.950 "num_base_bdevs": 4, 00:15:00.950 "num_base_bdevs_discovered": 4, 00:15:00.950 "num_base_bdevs_operational": 4, 00:15:00.950 "process": { 00:15:00.950 "type": "rebuild", 00:15:00.950 "target": "spare", 00:15:00.950 "progress": { 00:15:00.950 "blocks": 20480, 00:15:00.950 "percent": 31 00:15:00.950 } 00:15:00.950 }, 00:15:00.950 "base_bdevs_list": [ 00:15:00.950 { 00:15:00.950 "name": "spare", 00:15:00.950 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:00.950 "is_configured": true, 00:15:00.950 "data_offset": 0, 00:15:00.950 "data_size": 65536 00:15:00.950 }, 00:15:00.950 { 
00:15:00.950 "name": "BaseBdev2", 00:15:00.950 "uuid": "0de870d6-d5e0-59de-b756-2dd5192f03e2", 00:15:00.950 "is_configured": true, 00:15:00.950 "data_offset": 0, 00:15:00.950 "data_size": 65536 00:15:00.950 }, 00:15:00.950 { 00:15:00.950 "name": "BaseBdev3", 00:15:00.950 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:15:00.950 "is_configured": true, 00:15:00.950 "data_offset": 0, 00:15:00.950 "data_size": 65536 00:15:00.950 }, 00:15:00.950 { 00:15:00.951 "name": "BaseBdev4", 00:15:00.951 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 } 00:15:00.951 ] 00:15:00.951 }' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.951 [2024-11-05 03:26:14.318773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.951 
[2024-11-05 03:26:14.353950] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.951 "name": "raid_bdev1", 00:15:00.951 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:00.951 "strip_size_kb": 0, 00:15:00.951 "state": "online", 00:15:00.951 "raid_level": "raid1", 00:15:00.951 "superblock": false, 00:15:00.951 "num_base_bdevs": 4, 00:15:00.951 "num_base_bdevs_discovered": 3, 00:15:00.951 "num_base_bdevs_operational": 3, 00:15:00.951 "process": { 
00:15:00.951 "type": "rebuild", 00:15:00.951 "target": "spare", 00:15:00.951 "progress": { 00:15:00.951 "blocks": 24576, 00:15:00.951 "percent": 37 00:15:00.951 } 00:15:00.951 }, 00:15:00.951 "base_bdevs_list": [ 00:15:00.951 { 00:15:00.951 "name": "spare", 00:15:00.951 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "name": null, 00:15:00.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.951 "is_configured": false, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "name": "BaseBdev3", 00:15:00.951 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "name": "BaseBdev4", 00:15:00.951 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 } 00:15:00.951 ] 00:15:00.951 }' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=476 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.951 "name": "raid_bdev1", 00:15:00.951 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:00.951 "strip_size_kb": 0, 00:15:00.951 "state": "online", 00:15:00.951 "raid_level": "raid1", 00:15:00.951 "superblock": false, 00:15:00.951 "num_base_bdevs": 4, 00:15:00.951 "num_base_bdevs_discovered": 3, 00:15:00.951 "num_base_bdevs_operational": 3, 00:15:00.951 "process": { 00:15:00.951 "type": "rebuild", 00:15:00.951 "target": "spare", 00:15:00.951 "progress": { 00:15:00.951 "blocks": 26624, 00:15:00.951 "percent": 40 00:15:00.951 } 00:15:00.951 }, 00:15:00.951 "base_bdevs_list": [ 00:15:00.951 { 00:15:00.951 "name": "spare", 00:15:00.951 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "name": null, 00:15:00.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.951 "is_configured": false, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 }, 
00:15:00.951 { 00:15:00.951 "name": "BaseBdev3", 00:15:00.951 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "name": "BaseBdev4", 00:15:00.951 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 0, 00:15:00.951 "data_size": 65536 00:15:00.951 } 00:15:00.951 ] 00:15:00.951 }' 00:15:00.951 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.211 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.211 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.211 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.211 03:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.149 "name": "raid_bdev1", 00:15:02.149 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:02.149 "strip_size_kb": 0, 00:15:02.149 "state": "online", 00:15:02.149 "raid_level": "raid1", 00:15:02.149 "superblock": false, 00:15:02.149 "num_base_bdevs": 4, 00:15:02.149 "num_base_bdevs_discovered": 3, 00:15:02.149 "num_base_bdevs_operational": 3, 00:15:02.149 "process": { 00:15:02.149 "type": "rebuild", 00:15:02.149 "target": "spare", 00:15:02.149 "progress": { 00:15:02.149 "blocks": 51200, 00:15:02.149 "percent": 78 00:15:02.149 } 00:15:02.149 }, 00:15:02.149 "base_bdevs_list": [ 00:15:02.149 { 00:15:02.149 "name": "spare", 00:15:02.149 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:02.149 "is_configured": true, 00:15:02.149 "data_offset": 0, 00:15:02.149 "data_size": 65536 00:15:02.149 }, 00:15:02.149 { 00:15:02.149 "name": null, 00:15:02.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.149 "is_configured": false, 00:15:02.149 "data_offset": 0, 00:15:02.149 "data_size": 65536 00:15:02.149 }, 00:15:02.149 { 00:15:02.149 "name": "BaseBdev3", 00:15:02.149 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:15:02.149 "is_configured": true, 00:15:02.149 "data_offset": 0, 00:15:02.149 "data_size": 65536 00:15:02.149 }, 00:15:02.149 { 00:15:02.149 "name": "BaseBdev4", 00:15:02.149 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:02.149 "is_configured": true, 00:15:02.149 "data_offset": 0, 00:15:02.149 "data_size": 65536 00:15:02.149 } 00:15:02.149 ] 00:15:02.149 }' 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.149 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.408 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.408 03:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.976 [2024-11-05 03:26:16.368809] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:02.976 [2024-11-05 03:26:16.368914] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:02.976 [2024-11-05 03:26:16.369004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.235 03:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.495 03:26:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.495 "name": "raid_bdev1", 00:15:03.495 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:03.495 "strip_size_kb": 0, 00:15:03.495 "state": "online", 00:15:03.495 "raid_level": "raid1", 00:15:03.495 "superblock": false, 00:15:03.495 "num_base_bdevs": 4, 00:15:03.495 "num_base_bdevs_discovered": 3, 00:15:03.495 "num_base_bdevs_operational": 3, 00:15:03.495 "base_bdevs_list": [ 00:15:03.495 { 00:15:03.495 "name": "spare", 00:15:03.495 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:03.495 "is_configured": true, 00:15:03.495 "data_offset": 0, 00:15:03.495 "data_size": 65536 00:15:03.495 }, 00:15:03.495 { 00:15:03.495 "name": null, 00:15:03.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.495 "is_configured": false, 00:15:03.495 "data_offset": 0, 00:15:03.495 "data_size": 65536 00:15:03.495 }, 00:15:03.495 { 00:15:03.495 "name": "BaseBdev3", 00:15:03.495 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:15:03.495 "is_configured": true, 00:15:03.495 "data_offset": 0, 00:15:03.495 "data_size": 65536 00:15:03.495 }, 00:15:03.495 { 00:15:03.495 "name": "BaseBdev4", 00:15:03.495 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:03.495 "is_configured": true, 00:15:03.495 "data_offset": 0, 00:15:03.495 "data_size": 65536 00:15:03.495 } 00:15:03.495 ] 00:15:03.495 }' 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.496 03:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.496 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.496 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.496 "name": "raid_bdev1", 00:15:03.496 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:03.496 "strip_size_kb": 0, 00:15:03.496 "state": "online", 00:15:03.496 "raid_level": "raid1", 00:15:03.496 "superblock": false, 00:15:03.496 "num_base_bdevs": 4, 00:15:03.496 "num_base_bdevs_discovered": 3, 00:15:03.496 "num_base_bdevs_operational": 3, 00:15:03.496 "base_bdevs_list": [ 00:15:03.496 { 00:15:03.496 "name": "spare", 00:15:03.496 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:03.496 "is_configured": true, 00:15:03.496 "data_offset": 0, 00:15:03.496 "data_size": 65536 00:15:03.496 }, 00:15:03.496 { 00:15:03.496 "name": null, 00:15:03.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.496 "is_configured": false, 00:15:03.496 "data_offset": 0, 00:15:03.496 "data_size": 65536 00:15:03.496 }, 00:15:03.496 { 00:15:03.496 "name": "BaseBdev3", 00:15:03.496 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 
00:15:03.496 "is_configured": true, 00:15:03.496 "data_offset": 0, 00:15:03.496 "data_size": 65536 00:15:03.496 }, 00:15:03.496 { 00:15:03.496 "name": "BaseBdev4", 00:15:03.496 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:03.496 "is_configured": true, 00:15:03.496 "data_offset": 0, 00:15:03.496 "data_size": 65536 00:15:03.496 } 00:15:03.496 ] 00:15:03.496 }' 00:15:03.496 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.496 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.496 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.755 
03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.755 "name": "raid_bdev1", 00:15:03.755 "uuid": "cee7da40-11f4-45f9-991d-a5c1cc56cf7c", 00:15:03.755 "strip_size_kb": 0, 00:15:03.755 "state": "online", 00:15:03.755 "raid_level": "raid1", 00:15:03.755 "superblock": false, 00:15:03.755 "num_base_bdevs": 4, 00:15:03.755 "num_base_bdevs_discovered": 3, 00:15:03.755 "num_base_bdevs_operational": 3, 00:15:03.755 "base_bdevs_list": [ 00:15:03.755 { 00:15:03.755 "name": "spare", 00:15:03.755 "uuid": "765f8cd0-60b9-5f07-a5cd-9fac2cef6367", 00:15:03.755 "is_configured": true, 00:15:03.755 "data_offset": 0, 00:15:03.755 "data_size": 65536 00:15:03.755 }, 00:15:03.755 { 00:15:03.755 "name": null, 00:15:03.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.755 "is_configured": false, 00:15:03.755 "data_offset": 0, 00:15:03.755 "data_size": 65536 00:15:03.755 }, 00:15:03.755 { 00:15:03.755 "name": "BaseBdev3", 00:15:03.755 "uuid": "0903332d-cac3-54dd-8b41-9c301ff9d349", 00:15:03.755 "is_configured": true, 00:15:03.755 "data_offset": 0, 00:15:03.755 "data_size": 65536 00:15:03.755 }, 00:15:03.755 { 00:15:03.755 "name": "BaseBdev4", 00:15:03.755 "uuid": "0837df91-d278-5305-8495-66d410530c5a", 00:15:03.755 "is_configured": true, 00:15:03.755 "data_offset": 0, 00:15:03.755 "data_size": 65536 00:15:03.755 } 00:15:03.755 ] 00:15:03.755 }' 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.755 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.406 [2024-11-05 03:26:17.671722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.406 [2024-11-05 03:26:17.671756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.406 [2024-11-05 03:26:17.671837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.406 [2024-11-05 03:26:17.671927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.406 [2024-11-05 03:26:17.671941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:04.406 03:26:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:04.406 /dev/nbd0 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:04.688 
03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.688 1+0 records in 00:15:04.688 1+0 records out 00:15:04.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029587 s, 13.8 MB/s 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:04.688 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:04.947 /dev/nbd1 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:04.948 03:26:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.948 1+0 records in 00:15:04.948 1+0 records out 00:15:04.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433717 s, 9.4 MB/s 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:04.948 03:26:18 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.948 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.515 03:26:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.774 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.775 03:26:19 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77551 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77551 ']' 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77551 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77551 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:05.775 killing process with pid 77551 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77551' 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77551 00:15:05.775 Received shutdown signal, test time was about 60.000000 seconds 00:15:05.775 00:15:05.775 Latency(us) 00:15:05.775 [2024-11-05T03:26:19.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.775 [2024-11-05T03:26:19.414Z] =================================================================================================================== 00:15:05.775 [2024-11-05T03:26:19.414Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:05.775 [2024-11-05 
03:26:19.244207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.775 03:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77551 00:15:06.034 [2024-11-05 03:26:19.654043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:06.970 00:15:06.970 real 0m20.089s 00:15:06.970 user 0m23.015s 00:15:06.970 sys 0m3.468s 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.970 ************************************ 00:15:06.970 END TEST raid_rebuild_test 00:15:06.970 ************************************ 00:15:06.970 03:26:20 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:06.970 03:26:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:06.970 03:26:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:06.970 03:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.970 ************************************ 00:15:06.970 START TEST raid_rebuild_test_sb 00:15:06.970 ************************************ 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:06.970 03:26:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78019 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78019 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78019 ']' 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:06.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.970 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.971 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:06.971 03:26:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.230 [2024-11-05 03:26:20.699411] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:15:07.230 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:07.230 Zero copy mechanism will not be used. 00:15:07.230 [2024-11-05 03:26:20.699591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78019 ] 00:15:07.489 [2024-11-05 03:26:20.885198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.489 [2024-11-05 03:26:21.005841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.747 [2024-11-05 03:26:21.178457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.747 [2024-11-05 03:26:21.178539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.315 BaseBdev1_malloc 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.315 [2024-11-05 03:26:21.757497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.315 [2024-11-05 03:26:21.757583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.315 [2024-11-05 03:26:21.757616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.315 [2024-11-05 03:26:21.757636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.315 [2024-11-05 03:26:21.760579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.315 [2024-11-05 03:26:21.760633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.315 BaseBdev1 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.315 BaseBdev2_malloc 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.315 [2024-11-05 03:26:21.811921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:08.315 [2024-11-05 
03:26:21.812010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.315 [2024-11-05 03:26:21.812037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.315 [2024-11-05 03:26:21.812056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.315 [2024-11-05 03:26:21.814747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.315 [2024-11-05 03:26:21.814809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.315 BaseBdev2 00:15:08.315 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.316 BaseBdev3_malloc 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.316 [2024-11-05 03:26:21.874939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:08.316 [2024-11-05 03:26:21.875046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.316 [2024-11-05 03:26:21.875075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:15:08.316 [2024-11-05 03:26:21.875093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.316 [2024-11-05 03:26:21.877810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.316 [2024-11-05 03:26:21.877894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:08.316 BaseBdev3 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.316 BaseBdev4_malloc 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.316 [2024-11-05 03:26:21.919271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:08.316 [2024-11-05 03:26:21.919377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.316 [2024-11-05 03:26:21.919404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:08.316 [2024-11-05 03:26:21.919425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.316 [2024-11-05 03:26:21.922035] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.316 [2024-11-05 03:26:21.922145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:08.316 BaseBdev4 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.316 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.574 spare_malloc 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.574 spare_delay 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.574 [2024-11-05 03:26:21.976254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.574 [2024-11-05 03:26:21.976369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.574 [2024-11-05 03:26:21.976401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:15:08.574 [2024-11-05 03:26:21.976419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.574 [2024-11-05 03:26:21.979384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.574 [2024-11-05 03:26:21.979447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.574 spare 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.574 [2024-11-05 03:26:21.984347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.574 [2024-11-05 03:26:21.986984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.574 [2024-11-05 03:26:21.987091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.574 [2024-11-05 03:26:21.987164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.574 [2024-11-05 03:26:21.987452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.574 [2024-11-05 03:26:21.987507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.574 [2024-11-05 03:26:21.987892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:08.574 [2024-11-05 03:26:21.988153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.574 [2024-11-05 03:26:21.988182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.574 [2024-11-05 03:26:21.988438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.574 03:26:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.574 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.574 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:08.574 "name": "raid_bdev1", 00:15:08.574 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:08.574 "strip_size_kb": 0, 00:15:08.574 "state": "online", 00:15:08.574 "raid_level": "raid1", 00:15:08.574 "superblock": true, 00:15:08.574 "num_base_bdevs": 4, 00:15:08.574 "num_base_bdevs_discovered": 4, 00:15:08.574 "num_base_bdevs_operational": 4, 00:15:08.574 "base_bdevs_list": [ 00:15:08.574 { 00:15:08.574 "name": "BaseBdev1", 00:15:08.574 "uuid": "4b9a1a5f-d53a-59c8-a50b-f0e8f1a6614c", 00:15:08.574 "is_configured": true, 00:15:08.574 "data_offset": 2048, 00:15:08.574 "data_size": 63488 00:15:08.574 }, 00:15:08.574 { 00:15:08.574 "name": "BaseBdev2", 00:15:08.574 "uuid": "e6fc0621-9dc8-5653-a76d-23ebc51f53d5", 00:15:08.574 "is_configured": true, 00:15:08.574 "data_offset": 2048, 00:15:08.574 "data_size": 63488 00:15:08.574 }, 00:15:08.574 { 00:15:08.574 "name": "BaseBdev3", 00:15:08.574 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:08.574 "is_configured": true, 00:15:08.574 "data_offset": 2048, 00:15:08.574 "data_size": 63488 00:15:08.574 }, 00:15:08.574 { 00:15:08.574 "name": "BaseBdev4", 00:15:08.574 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:08.574 "is_configured": true, 00:15:08.574 "data_offset": 2048, 00:15:08.574 "data_size": 63488 00:15:08.574 } 00:15:08.574 ] 00:15:08.574 }' 00:15:08.574 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.574 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.141 
[2024-11-05 03:26:22.513032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:09.141 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.142 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.142 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.142 03:26:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:15:09.142 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.142 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.142 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:09.400 [2024-11-05 03:26:22.896759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:09.400 /dev/nbd0 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:09.400 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.401 1+0 records in 00:15:09.401 1+0 records out 00:15:09.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335813 s, 12.2 MB/s 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:09.401 03:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:17.582 63488+0 records in 00:15:17.582 63488+0 records out 00:15:17.582 32505856 bytes (33 MB, 31 MiB) copied, 7.4268 s, 4.4 MB/s 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.582 [2024-11-05 03:26:30.601529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.582 [2024-11-05 03:26:30.633585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.582 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.583 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.583 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.583 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.583 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.583 "name": "raid_bdev1", 00:15:17.583 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:17.583 "strip_size_kb": 0, 00:15:17.583 "state": "online", 00:15:17.583 "raid_level": "raid1", 00:15:17.583 "superblock": true, 00:15:17.583 "num_base_bdevs": 4, 00:15:17.583 "num_base_bdevs_discovered": 3, 00:15:17.583 "num_base_bdevs_operational": 3, 00:15:17.583 "base_bdevs_list": [ 00:15:17.583 { 00:15:17.583 "name": null, 00:15:17.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.583 "is_configured": false, 00:15:17.583 "data_offset": 0, 00:15:17.583 "data_size": 63488 00:15:17.583 }, 00:15:17.583 { 00:15:17.583 "name": "BaseBdev2", 00:15:17.583 "uuid": 
"e6fc0621-9dc8-5653-a76d-23ebc51f53d5", 00:15:17.583 "is_configured": true, 00:15:17.583 "data_offset": 2048, 00:15:17.583 "data_size": 63488 00:15:17.583 }, 00:15:17.583 { 00:15:17.583 "name": "BaseBdev3", 00:15:17.583 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:17.583 "is_configured": true, 00:15:17.583 "data_offset": 2048, 00:15:17.583 "data_size": 63488 00:15:17.583 }, 00:15:17.583 { 00:15:17.583 "name": "BaseBdev4", 00:15:17.583 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:17.583 "is_configured": true, 00:15:17.583 "data_offset": 2048, 00:15:17.583 "data_size": 63488 00:15:17.583 } 00:15:17.583 ] 00:15:17.583 }' 00:15:17.583 03:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.583 03:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.583 03:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.583 03:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.583 03:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.583 [2024-11-05 03:26:31.189747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.583 [2024-11-05 03:26:31.204206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:17.583 03:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.583 03:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:17.583 [2024-11-05 03:26:31.206805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.958 "name": "raid_bdev1", 00:15:18.958 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:18.958 "strip_size_kb": 0, 00:15:18.958 "state": "online", 00:15:18.958 "raid_level": "raid1", 00:15:18.958 "superblock": true, 00:15:18.958 "num_base_bdevs": 4, 00:15:18.958 "num_base_bdevs_discovered": 4, 00:15:18.958 "num_base_bdevs_operational": 4, 00:15:18.958 "process": { 00:15:18.958 "type": "rebuild", 00:15:18.958 "target": "spare", 00:15:18.958 "progress": { 00:15:18.958 "blocks": 20480, 00:15:18.958 "percent": 32 00:15:18.958 } 00:15:18.958 }, 00:15:18.958 "base_bdevs_list": [ 00:15:18.958 { 00:15:18.958 "name": "spare", 00:15:18.958 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 00:15:18.958 "data_size": 63488 00:15:18.958 }, 00:15:18.958 { 00:15:18.958 "name": "BaseBdev2", 00:15:18.958 "uuid": "e6fc0621-9dc8-5653-a76d-23ebc51f53d5", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 
00:15:18.958 "data_size": 63488 00:15:18.958 }, 00:15:18.958 { 00:15:18.958 "name": "BaseBdev3", 00:15:18.958 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 00:15:18.958 "data_size": 63488 00:15:18.958 }, 00:15:18.958 { 00:15:18.958 "name": "BaseBdev4", 00:15:18.958 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 00:15:18.958 "data_size": 63488 00:15:18.958 } 00:15:18.958 ] 00:15:18.958 }' 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.958 [2024-11-05 03:26:32.372184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.958 [2024-11-05 03:26:32.415445] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:18.958 [2024-11-05 03:26:32.415558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.958 [2024-11-05 03:26:32.415583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.958 [2024-11-05 03:26:32.415598] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.958 "name": "raid_bdev1", 00:15:18.958 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:18.958 "strip_size_kb": 0, 00:15:18.958 "state": "online", 00:15:18.958 "raid_level": "raid1", 
00:15:18.958 "superblock": true, 00:15:18.958 "num_base_bdevs": 4, 00:15:18.958 "num_base_bdevs_discovered": 3, 00:15:18.958 "num_base_bdevs_operational": 3, 00:15:18.958 "base_bdevs_list": [ 00:15:18.958 { 00:15:18.958 "name": null, 00:15:18.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.958 "is_configured": false, 00:15:18.958 "data_offset": 0, 00:15:18.958 "data_size": 63488 00:15:18.958 }, 00:15:18.958 { 00:15:18.958 "name": "BaseBdev2", 00:15:18.958 "uuid": "e6fc0621-9dc8-5653-a76d-23ebc51f53d5", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 00:15:18.958 "data_size": 63488 00:15:18.958 }, 00:15:18.958 { 00:15:18.958 "name": "BaseBdev3", 00:15:18.958 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 00:15:18.958 "data_size": 63488 00:15:18.958 }, 00:15:18.958 { 00:15:18.958 "name": "BaseBdev4", 00:15:18.958 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:18.958 "is_configured": true, 00:15:18.958 "data_offset": 2048, 00:15:18.958 "data_size": 63488 00:15:18.958 } 00:15:18.958 ] 00:15:18.958 }' 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.958 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 03:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.525 "name": "raid_bdev1", 00:15:19.525 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:19.525 "strip_size_kb": 0, 00:15:19.525 "state": "online", 00:15:19.525 "raid_level": "raid1", 00:15:19.525 "superblock": true, 00:15:19.525 "num_base_bdevs": 4, 00:15:19.525 "num_base_bdevs_discovered": 3, 00:15:19.525 "num_base_bdevs_operational": 3, 00:15:19.525 "base_bdevs_list": [ 00:15:19.525 { 00:15:19.525 "name": null, 00:15:19.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.525 "is_configured": false, 00:15:19.525 "data_offset": 0, 00:15:19.525 "data_size": 63488 00:15:19.525 }, 00:15:19.525 { 00:15:19.525 "name": "BaseBdev2", 00:15:19.525 "uuid": "e6fc0621-9dc8-5653-a76d-23ebc51f53d5", 00:15:19.525 "is_configured": true, 00:15:19.525 "data_offset": 2048, 00:15:19.525 "data_size": 63488 00:15:19.525 }, 00:15:19.525 { 00:15:19.525 "name": "BaseBdev3", 00:15:19.525 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:19.525 "is_configured": true, 00:15:19.525 "data_offset": 2048, 00:15:19.525 "data_size": 63488 00:15:19.525 }, 00:15:19.525 { 00:15:19.525 "name": "BaseBdev4", 00:15:19.525 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:19.525 "is_configured": true, 00:15:19.525 "data_offset": 2048, 00:15:19.525 "data_size": 63488 00:15:19.525 } 00:15:19.525 ] 00:15:19.525 }' 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.525 03:26:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 [2024-11-05 03:26:33.137187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.525 [2024-11-05 03:26:33.149922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.525 03:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:19.525 [2024-11-05 03:26:33.152666] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.901 "name": "raid_bdev1", 00:15:20.901 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:20.901 "strip_size_kb": 0, 00:15:20.901 "state": "online", 00:15:20.901 "raid_level": "raid1", 00:15:20.901 "superblock": true, 00:15:20.901 "num_base_bdevs": 4, 00:15:20.901 "num_base_bdevs_discovered": 4, 00:15:20.901 "num_base_bdevs_operational": 4, 00:15:20.901 "process": { 00:15:20.901 "type": "rebuild", 00:15:20.901 "target": "spare", 00:15:20.901 "progress": { 00:15:20.901 "blocks": 20480, 00:15:20.901 "percent": 32 00:15:20.901 } 00:15:20.901 }, 00:15:20.901 "base_bdevs_list": [ 00:15:20.901 { 00:15:20.901 "name": "spare", 00:15:20.901 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:20.901 "is_configured": true, 00:15:20.901 "data_offset": 2048, 00:15:20.901 "data_size": 63488 00:15:20.901 }, 00:15:20.901 { 00:15:20.901 "name": "BaseBdev2", 00:15:20.901 "uuid": "e6fc0621-9dc8-5653-a76d-23ebc51f53d5", 00:15:20.901 "is_configured": true, 00:15:20.901 "data_offset": 2048, 00:15:20.901 "data_size": 63488 00:15:20.901 }, 00:15:20.901 { 00:15:20.901 "name": "BaseBdev3", 00:15:20.901 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:20.901 "is_configured": true, 00:15:20.901 "data_offset": 2048, 00:15:20.901 "data_size": 63488 00:15:20.901 }, 00:15:20.901 { 00:15:20.901 "name": "BaseBdev4", 00:15:20.901 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:20.901 "is_configured": true, 00:15:20.901 "data_offset": 2048, 00:15:20.901 "data_size": 63488 00:15:20.901 } 00:15:20.901 ] 00:15:20.901 }' 00:15:20.901 03:26:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.901 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:20.902 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.902 [2024-11-05 03:26:34.321707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.902 [2024-11-05 03:26:34.461855] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:20.902 03:26:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.902 "name": "raid_bdev1", 00:15:20.902 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:20.902 "strip_size_kb": 0, 00:15:20.902 "state": "online", 00:15:20.902 "raid_level": "raid1", 00:15:20.902 "superblock": true, 00:15:20.902 "num_base_bdevs": 4, 00:15:20.902 "num_base_bdevs_discovered": 3, 00:15:20.902 "num_base_bdevs_operational": 3, 00:15:20.902 "process": { 00:15:20.902 "type": "rebuild", 00:15:20.902 "target": "spare", 00:15:20.902 "progress": { 00:15:20.902 "blocks": 24576, 00:15:20.902 "percent": 38 00:15:20.902 } 00:15:20.902 }, 00:15:20.902 "base_bdevs_list": [ 00:15:20.902 { 00:15:20.902 "name": "spare", 00:15:20.902 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:20.902 "is_configured": true, 00:15:20.902 "data_offset": 2048, 00:15:20.902 "data_size": 63488 
00:15:20.902 }, 00:15:20.902 { 00:15:20.902 "name": null, 00:15:20.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.902 "is_configured": false, 00:15:20.902 "data_offset": 0, 00:15:20.902 "data_size": 63488 00:15:20.902 }, 00:15:20.902 { 00:15:20.902 "name": "BaseBdev3", 00:15:20.902 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:20.902 "is_configured": true, 00:15:20.902 "data_offset": 2048, 00:15:20.902 "data_size": 63488 00:15:20.902 }, 00:15:20.902 { 00:15:20.902 "name": "BaseBdev4", 00:15:20.902 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:20.902 "is_configured": true, 00:15:20.902 "data_offset": 2048, 00:15:20.902 "data_size": 63488 00:15:20.902 } 00:15:20.902 ] 00:15:20.902 }' 00:15:20.902 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=496 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.160 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.161 03:26:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.161 "name": "raid_bdev1", 00:15:21.161 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:21.161 "strip_size_kb": 0, 00:15:21.161 "state": "online", 00:15:21.161 "raid_level": "raid1", 00:15:21.161 "superblock": true, 00:15:21.161 "num_base_bdevs": 4, 00:15:21.161 "num_base_bdevs_discovered": 3, 00:15:21.161 "num_base_bdevs_operational": 3, 00:15:21.161 "process": { 00:15:21.161 "type": "rebuild", 00:15:21.161 "target": "spare", 00:15:21.161 "progress": { 00:15:21.161 "blocks": 26624, 00:15:21.161 "percent": 41 00:15:21.161 } 00:15:21.161 }, 00:15:21.161 "base_bdevs_list": [ 00:15:21.161 { 00:15:21.161 "name": "spare", 00:15:21.161 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:21.161 "is_configured": true, 00:15:21.161 "data_offset": 2048, 00:15:21.161 "data_size": 63488 00:15:21.161 }, 00:15:21.161 { 00:15:21.161 "name": null, 00:15:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.161 "is_configured": false, 00:15:21.161 "data_offset": 0, 00:15:21.161 "data_size": 63488 00:15:21.161 }, 00:15:21.161 { 00:15:21.161 "name": "BaseBdev3", 00:15:21.161 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:21.161 "is_configured": true, 00:15:21.161 "data_offset": 2048, 00:15:21.161 "data_size": 63488 00:15:21.161 }, 00:15:21.161 { 00:15:21.161 "name": "BaseBdev4", 00:15:21.161 "uuid": 
"3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:21.161 "is_configured": true, 00:15:21.161 "data_offset": 2048, 00:15:21.161 "data_size": 63488 00:15:21.161 } 00:15:21.161 ] 00:15:21.161 }' 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.161 03:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.546 03:26:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.546 "name": "raid_bdev1", 00:15:22.546 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:22.546 "strip_size_kb": 0, 00:15:22.546 "state": "online", 00:15:22.546 "raid_level": "raid1", 00:15:22.546 "superblock": true, 00:15:22.546 "num_base_bdevs": 4, 00:15:22.546 "num_base_bdevs_discovered": 3, 00:15:22.546 "num_base_bdevs_operational": 3, 00:15:22.546 "process": { 00:15:22.546 "type": "rebuild", 00:15:22.546 "target": "spare", 00:15:22.546 "progress": { 00:15:22.546 "blocks": 51200, 00:15:22.546 "percent": 80 00:15:22.546 } 00:15:22.546 }, 00:15:22.546 "base_bdevs_list": [ 00:15:22.546 { 00:15:22.546 "name": "spare", 00:15:22.546 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:22.546 "is_configured": true, 00:15:22.546 "data_offset": 2048, 00:15:22.546 "data_size": 63488 00:15:22.546 }, 00:15:22.546 { 00:15:22.546 "name": null, 00:15:22.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.546 "is_configured": false, 00:15:22.546 "data_offset": 0, 00:15:22.546 "data_size": 63488 00:15:22.546 }, 00:15:22.546 { 00:15:22.546 "name": "BaseBdev3", 00:15:22.546 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:22.546 "is_configured": true, 00:15:22.546 "data_offset": 2048, 00:15:22.546 "data_size": 63488 00:15:22.546 }, 00:15:22.546 { 00:15:22.546 "name": "BaseBdev4", 00:15:22.546 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:22.546 "is_configured": true, 00:15:22.546 "data_offset": 2048, 00:15:22.546 "data_size": 63488 00:15:22.546 } 00:15:22.546 ] 00:15:22.546 }' 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.546 03:26:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.546 03:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.804 [2024-11-05 03:26:36.376157] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:22.804 [2024-11-05 03:26:36.376287] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:22.804 [2024-11-05 03:26:36.376479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.371 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.372 03:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.631 "name": "raid_bdev1", 00:15:23.631 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:23.631 
"strip_size_kb": 0, 00:15:23.631 "state": "online", 00:15:23.631 "raid_level": "raid1", 00:15:23.631 "superblock": true, 00:15:23.631 "num_base_bdevs": 4, 00:15:23.631 "num_base_bdevs_discovered": 3, 00:15:23.631 "num_base_bdevs_operational": 3, 00:15:23.631 "base_bdevs_list": [ 00:15:23.631 { 00:15:23.631 "name": "spare", 00:15:23.631 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:23.631 "is_configured": true, 00:15:23.631 "data_offset": 2048, 00:15:23.631 "data_size": 63488 00:15:23.631 }, 00:15:23.631 { 00:15:23.631 "name": null, 00:15:23.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.631 "is_configured": false, 00:15:23.631 "data_offset": 0, 00:15:23.631 "data_size": 63488 00:15:23.631 }, 00:15:23.631 { 00:15:23.631 "name": "BaseBdev3", 00:15:23.631 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:23.631 "is_configured": true, 00:15:23.631 "data_offset": 2048, 00:15:23.631 "data_size": 63488 00:15:23.631 }, 00:15:23.631 { 00:15:23.631 "name": "BaseBdev4", 00:15:23.631 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:23.631 "is_configured": true, 00:15:23.631 "data_offset": 2048, 00:15:23.631 "data_size": 63488 00:15:23.631 } 00:15:23.631 ] 00:15:23.631 }' 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:23.631 "name": "raid_bdev1",
00:15:23.631 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1",
00:15:23.631 "strip_size_kb": 0,
00:15:23.631 "state": "online",
00:15:23.631 "raid_level": "raid1",
00:15:23.631 "superblock": true,
00:15:23.631 "num_base_bdevs": 4,
00:15:23.631 "num_base_bdevs_discovered": 3,
00:15:23.631 "num_base_bdevs_operational": 3,
00:15:23.631 "base_bdevs_list": [
00:15:23.631 {
00:15:23.631 "name": "spare",
00:15:23.631 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072",
00:15:23.631 "is_configured": true,
00:15:23.631 "data_offset": 2048,
00:15:23.631 "data_size": 63488
00:15:23.631 },
00:15:23.631 {
00:15:23.631 "name": null,
00:15:23.631 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.631 "is_configured": false,
00:15:23.631 "data_offset": 0,
00:15:23.631 "data_size": 63488
00:15:23.631 },
00:15:23.631 {
00:15:23.631 "name": "BaseBdev3",
00:15:23.631 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6",
00:15:23.631 "is_configured": true,
00:15:23.631 "data_offset": 2048,
00:15:23.631 "data_size": 63488
00:15:23.631 },
00:15:23.631 {
00:15:23.631 "name": "BaseBdev4",
00:15:23.631 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d",
00:15:23.631 "is_configured": true,
00:15:23.631 "data_offset": 2048,
00:15:23.631 "data_size": 63488
00:15:23.631 }
00:15:23.631 ]
00:15:23.631 }'
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:23.631 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:23.632 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:23.891 "name": "raid_bdev1",
00:15:23.891 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1",
00:15:23.891 "strip_size_kb": 0,
00:15:23.891 "state": "online",
00:15:23.891 "raid_level": "raid1",
00:15:23.891 "superblock": true,
00:15:23.891 "num_base_bdevs": 4,
00:15:23.891 "num_base_bdevs_discovered": 3,
00:15:23.891 "num_base_bdevs_operational": 3,
00:15:23.891 "base_bdevs_list": [
00:15:23.891 {
00:15:23.891 "name": "spare",
00:15:23.891 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072",
00:15:23.891 "is_configured": true,
00:15:23.891 "data_offset": 2048,
00:15:23.891 "data_size": 63488
00:15:23.891 },
00:15:23.891 {
00:15:23.891 "name": null,
00:15:23.891 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.891 "is_configured": false,
00:15:23.891 "data_offset": 0,
00:15:23.891 "data_size": 63488
00:15:23.891 },
00:15:23.891 {
00:15:23.891 "name": "BaseBdev3",
00:15:23.891 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6",
00:15:23.891 "is_configured": true,
00:15:23.891 "data_offset": 2048,
00:15:23.891 "data_size": 63488
00:15:23.891 },
00:15:23.891 {
00:15:23.891 "name": "BaseBdev4",
00:15:23.891 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d",
00:15:23.891 "is_configured": true,
00:15:23.891 "data_offset": 2048,
00:15:23.891 "data_size": 63488
00:15:23.891 }
00:15:23.891 ]
00:15:23.891 }'
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:23.891 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:24.460 [2024-11-05 03:26:37.815922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:24.460 [2024-11-05 03:26:37.815977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:24.460 [2024-11-05 03:26:37.816071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:24.460 [2024-11-05 03:26:37.816164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:24.460 [2024-11-05 03:26:37.816180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:24.460 03:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:15:24.719 /dev/nbd0
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:24.719 1+0 records in
00:15:24.719 1+0 records out
00:15:24.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327337 s, 12.5 MB/s
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:24.719 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:15:24.979 /dev/nbd1
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:24.979 1+0 records in
00:15:24.979 1+0 records out
00:15:24.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389565 s, 10.5 MB/s
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:24.979 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:25.238 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:25.497 03:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:25.756 [2024-11-05 03:26:39.326895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:25.756 [2024-11-05 03:26:39.326984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:25.756 [2024-11-05 03:26:39.327015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:15:25.756 [2024-11-05 03:26:39.327029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:25.756 [2024-11-05 03:26:39.329887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:25.756 [2024-11-05 03:26:39.329954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:25.756 [2024-11-05 03:26:39.330103] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:25.756 [2024-11-05 03:26:39.330169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:25.756 [2024-11-05 03:26:39.330427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:25.756 [2024-11-05 03:26:39.330587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:25.756 spare
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.756 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.015 [2024-11-05 03:26:39.430722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:15:26.015 [2024-11-05 03:26:39.430780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:26.015 [2024-11-05 03:26:39.431250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:15:26.015 [2024-11-05 03:26:39.431534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:15:26.015 [2024-11-05 03:26:39.431577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:15:26.015 [2024-11-05 03:26:39.431829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.015 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:26.015 "name": "raid_bdev1",
00:15:26.015 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1",
00:15:26.015 "strip_size_kb": 0,
00:15:26.015 "state": "online",
00:15:26.015 "raid_level": "raid1",
00:15:26.015 "superblock": true,
00:15:26.015 "num_base_bdevs": 4,
00:15:26.015 "num_base_bdevs_discovered": 3,
00:15:26.015 "num_base_bdevs_operational": 3,
00:15:26.015 "base_bdevs_list": [
00:15:26.015 {
00:15:26.015 "name": "spare",
00:15:26.015 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072",
00:15:26.015 "is_configured": true,
00:15:26.015 "data_offset": 2048,
00:15:26.015 "data_size": 63488
00:15:26.015 },
00:15:26.015 {
00:15:26.015 "name": null,
00:15:26.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:26.015 "is_configured": false,
00:15:26.015 "data_offset": 2048,
00:15:26.015 "data_size": 63488
00:15:26.015 },
00:15:26.015 {
00:15:26.015 "name": "BaseBdev3",
00:15:26.015 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6",
00:15:26.015 "is_configured": true,
00:15:26.015 "data_offset": 2048,
00:15:26.016 "data_size": 63488
00:15:26.016 },
00:15:26.016 {
00:15:26.016 "name": "BaseBdev4",
00:15:26.016 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d",
00:15:26.016 "is_configured": true,
00:15:26.016 "data_offset": 2048,
00:15:26.016 "data_size": 63488
00:15:26.016 }
00:15:26.016 ]
00:15:26.016 }'
00:15:26.016 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:26.016 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:26.583 "name": "raid_bdev1",
00:15:26.583 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1",
00:15:26.583 "strip_size_kb": 0,
00:15:26.583 "state": "online",
00:15:26.583 "raid_level": "raid1",
00:15:26.583 "superblock": true,
00:15:26.583 "num_base_bdevs": 4,
00:15:26.583 "num_base_bdevs_discovered": 3,
00:15:26.583 "num_base_bdevs_operational": 3,
00:15:26.583 "base_bdevs_list": [
00:15:26.583 {
00:15:26.583 "name": "spare",
00:15:26.583 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072",
00:15:26.583 "is_configured": true,
00:15:26.583 "data_offset": 2048,
00:15:26.583 "data_size": 63488
00:15:26.583 },
00:15:26.583 {
00:15:26.583 "name": null,
00:15:26.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:26.583 "is_configured": false,
00:15:26.583 "data_offset": 2048,
00:15:26.583 "data_size": 63488
00:15:26.583 },
00:15:26.583 {
00:15:26.583 "name": "BaseBdev3",
00:15:26.583 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6",
00:15:26.583 "is_configured": true,
00:15:26.583 "data_offset": 2048,
00:15:26.583 "data_size": 63488
00:15:26.583 },
00:15:26.583 {
00:15:26.583 "name": "BaseBdev4",
00:15:26.583 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d",
00:15:26.583 "is_configured": true,
00:15:26.583 "data_offset": 2048,
00:15:26.583 "data_size": 63488
00:15:26.583 }
00:15:26.583 ]
00:15:26.583 }'
00:15:26.583 03:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.583 [2024-11-05 03:26:40.156078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.583 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:26.583 "name": "raid_bdev1",
00:15:26.583 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1",
00:15:26.583 "strip_size_kb": 0,
00:15:26.583 "state": "online",
00:15:26.583 "raid_level": "raid1",
00:15:26.583 "superblock": true,
00:15:26.583 "num_base_bdevs": 4,
00:15:26.583 "num_base_bdevs_discovered": 2,
00:15:26.583 "num_base_bdevs_operational": 2,
00:15:26.583 "base_bdevs_list": [
00:15:26.583 {
00:15:26.583 "name": null,
00:15:26.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:26.583 "is_configured": false,
00:15:26.583 "data_offset": 0,
00:15:26.583 "data_size": 63488
00:15:26.583 },
00:15:26.583 {
00:15:26.583 "name": null,
00:15:26.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:26.583 "is_configured": false,
00:15:26.583 "data_offset": 2048,
00:15:26.583 "data_size": 63488
00:15:26.583 },
00:15:26.583 {
00:15:26.584 "name": "BaseBdev3",
00:15:26.584 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6",
00:15:26.584 "is_configured": true,
00:15:26.584 "data_offset": 2048,
00:15:26.584 "data_size": 63488
00:15:26.584 },
00:15:26.584 {
00:15:26.584 "name": "BaseBdev4",
00:15:26.584 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d",
00:15:26.584 "is_configured": true,
00:15:26.584 "data_offset": 2048,
00:15:26.584 "data_size": 63488
00:15:26.584 }
00:15:26.584 ]
00:15:26.584 }'
00:15:26.584 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:26.584 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.151 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:27.151 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.151 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.151 [2024-11-05 03:26:40.716228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:27.151 [2024-11-05 03:26:40.716526] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:15:27.151 [2024-11-05 03:26:40.716586] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:27.151 [2024-11-05 03:26:40.716644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:27.151 [2024-11-05 03:26:40.729040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50
00:15:27.151 03:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.151 03:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:27.151 [2024-11-05 03:26:40.731878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:28.108 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:28.367 "name": "raid_bdev1",
00:15:28.367 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1",
00:15:28.367 "strip_size_kb": 0,
00:15:28.367 "state": "online",
00:15:28.367 "raid_level": "raid1",
00:15:28.367 "superblock": true,
00:15:28.367 "num_base_bdevs": 4,
00:15:28.367 "num_base_bdevs_discovered": 3,
00:15:28.367 "num_base_bdevs_operational": 3,
00:15:28.367 "process": {
00:15:28.367 "type": "rebuild",
00:15:28.367 "target": "spare",
00:15:28.367 "progress": {
00:15:28.367 "blocks": 20480,
00:15:28.367 "percent": 32
00:15:28.367 }
00:15:28.367 },
00:15:28.367 "base_bdevs_list": [
00:15:28.367 {
00:15:28.367 "name": "spare",
00:15:28.367 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072",
00:15:28.367 "is_configured": true,
00:15:28.367 "data_offset": 2048,
00:15:28.367 "data_size": 63488
00:15:28.367 },
00:15:28.367 {
00:15:28.367 "name": null,
00:15:28.367 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:28.367 "is_configured": false,
00:15:28.367 "data_offset": 2048,
00:15:28.367 "data_size": 63488
00:15:28.367 },
00:15:28.367 {
00:15:28.367 "name": "BaseBdev3",
00:15:28.367 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6",
00:15:28.367 "is_configured": true,
00:15:28.367 "data_offset": 2048,
00:15:28.367 "data_size": 63488
00:15:28.367 },
00:15:28.367 {
00:15:28.367 "name": "BaseBdev4",
00:15:28.367 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d",
00:15:28.367 "is_configured": true,
00:15:28.367 "data_offset": 2048,
00:15:28.367 "data_size": 63488
00:15:28.367 }
00:15:28.367 ]
00:15:28.367 }'
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- #
xtrace_disable 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.367 [2024-11-05 03:26:41.900977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.367 [2024-11-05 03:26:41.940728] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:28.367 [2024-11-05 03:26:41.940833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.367 [2024-11-05 03:26:41.940859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.367 [2024-11-05 03:26:41.940870] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.367 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.368 03:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.368 03:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.626 03:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.626 "name": "raid_bdev1", 00:15:28.626 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:28.626 "strip_size_kb": 0, 00:15:28.626 "state": "online", 00:15:28.626 "raid_level": "raid1", 00:15:28.626 "superblock": true, 00:15:28.626 "num_base_bdevs": 4, 00:15:28.626 "num_base_bdevs_discovered": 2, 00:15:28.626 "num_base_bdevs_operational": 2, 00:15:28.626 "base_bdevs_list": [ 00:15:28.626 { 00:15:28.626 "name": null, 00:15:28.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.626 "is_configured": false, 00:15:28.626 "data_offset": 0, 00:15:28.626 "data_size": 63488 00:15:28.626 }, 00:15:28.626 { 00:15:28.626 "name": null, 00:15:28.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.626 "is_configured": false, 00:15:28.626 "data_offset": 2048, 00:15:28.626 "data_size": 63488 00:15:28.626 }, 00:15:28.626 { 00:15:28.626 "name": "BaseBdev3", 00:15:28.626 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:28.626 "is_configured": true, 00:15:28.626 "data_offset": 2048, 00:15:28.626 "data_size": 63488 00:15:28.626 }, 00:15:28.626 { 00:15:28.626 "name": "BaseBdev4", 00:15:28.626 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:28.626 "is_configured": true, 00:15:28.626 "data_offset": 2048, 00:15:28.626 "data_size": 63488 00:15:28.626 } 00:15:28.626 ] 00:15:28.626 }' 00:15:28.626 03:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:28.626 03:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.884 03:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.884 03:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.884 03:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.884 [2024-11-05 03:26:42.486533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.884 [2024-11-05 03:26:42.486608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.884 [2024-11-05 03:26:42.486648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:28.884 [2024-11-05 03:26:42.486665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.884 [2024-11-05 03:26:42.487350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.884 [2024-11-05 03:26:42.487406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.884 [2024-11-05 03:26:42.487527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:28.884 [2024-11-05 03:26:42.487548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:28.884 [2024-11-05 03:26:42.487567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:28.884 [2024-11-05 03:26:42.487625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.884 [2024-11-05 03:26:42.500480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:28.884 spare 00:15:28.884 03:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.884 03:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:28.884 [2024-11-05 03:26:42.503060] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.263 "name": "raid_bdev1", 00:15:30.263 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:30.263 "strip_size_kb": 0, 00:15:30.263 "state": "online", 00:15:30.263 
"raid_level": "raid1", 00:15:30.263 "superblock": true, 00:15:30.263 "num_base_bdevs": 4, 00:15:30.263 "num_base_bdevs_discovered": 3, 00:15:30.263 "num_base_bdevs_operational": 3, 00:15:30.263 "process": { 00:15:30.263 "type": "rebuild", 00:15:30.263 "target": "spare", 00:15:30.263 "progress": { 00:15:30.263 "blocks": 20480, 00:15:30.263 "percent": 32 00:15:30.263 } 00:15:30.263 }, 00:15:30.263 "base_bdevs_list": [ 00:15:30.263 { 00:15:30.263 "name": "spare", 00:15:30.263 "uuid": "7e506a13-b885-5f63-b8b7-eeac01dfa072", 00:15:30.263 "is_configured": true, 00:15:30.263 "data_offset": 2048, 00:15:30.263 "data_size": 63488 00:15:30.263 }, 00:15:30.263 { 00:15:30.263 "name": null, 00:15:30.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.263 "is_configured": false, 00:15:30.263 "data_offset": 2048, 00:15:30.263 "data_size": 63488 00:15:30.263 }, 00:15:30.263 { 00:15:30.263 "name": "BaseBdev3", 00:15:30.263 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:30.263 "is_configured": true, 00:15:30.263 "data_offset": 2048, 00:15:30.263 "data_size": 63488 00:15:30.263 }, 00:15:30.263 { 00:15:30.263 "name": "BaseBdev4", 00:15:30.263 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:30.263 "is_configured": true, 00:15:30.263 "data_offset": 2048, 00:15:30.263 "data_size": 63488 00:15:30.263 } 00:15:30.263 ] 00:15:30.263 }' 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.263 [2024-11-05 03:26:43.676191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.263 [2024-11-05 03:26:43.712140] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.263 [2024-11-05 03:26:43.712253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.263 [2024-11-05 03:26:43.712277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.263 [2024-11-05 03:26:43.712291] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.263 
03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.263 03:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.263 "name": "raid_bdev1", 00:15:30.263 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:30.263 "strip_size_kb": 0, 00:15:30.263 "state": "online", 00:15:30.263 "raid_level": "raid1", 00:15:30.263 "superblock": true, 00:15:30.263 "num_base_bdevs": 4, 00:15:30.263 "num_base_bdevs_discovered": 2, 00:15:30.263 "num_base_bdevs_operational": 2, 00:15:30.263 "base_bdevs_list": [ 00:15:30.263 { 00:15:30.263 "name": null, 00:15:30.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.264 "is_configured": false, 00:15:30.264 "data_offset": 0, 00:15:30.264 "data_size": 63488 00:15:30.264 }, 00:15:30.264 { 00:15:30.264 "name": null, 00:15:30.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.264 "is_configured": false, 00:15:30.264 "data_offset": 2048, 00:15:30.264 "data_size": 63488 00:15:30.264 }, 00:15:30.264 { 00:15:30.264 "name": "BaseBdev3", 00:15:30.264 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:30.264 "is_configured": true, 00:15:30.264 "data_offset": 2048, 00:15:30.264 "data_size": 63488 00:15:30.264 }, 00:15:30.264 { 00:15:30.264 "name": "BaseBdev4", 00:15:30.264 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:30.264 "is_configured": true, 00:15:30.264 "data_offset": 2048, 00:15:30.264 "data_size": 63488 00:15:30.264 } 00:15:30.264 ] 00:15:30.264 }' 00:15:30.264 03:26:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.264 03:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.833 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.833 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.834 "name": "raid_bdev1", 00:15:30.834 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:30.834 "strip_size_kb": 0, 00:15:30.834 "state": "online", 00:15:30.834 "raid_level": "raid1", 00:15:30.834 "superblock": true, 00:15:30.834 "num_base_bdevs": 4, 00:15:30.834 "num_base_bdevs_discovered": 2, 00:15:30.834 "num_base_bdevs_operational": 2, 00:15:30.834 "base_bdevs_list": [ 00:15:30.834 { 00:15:30.834 "name": null, 00:15:30.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.834 "is_configured": false, 00:15:30.834 "data_offset": 0, 00:15:30.834 "data_size": 63488 00:15:30.834 }, 00:15:30.834 
{ 00:15:30.834 "name": null, 00:15:30.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.834 "is_configured": false, 00:15:30.834 "data_offset": 2048, 00:15:30.834 "data_size": 63488 00:15:30.834 }, 00:15:30.834 { 00:15:30.834 "name": "BaseBdev3", 00:15:30.834 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:30.834 "is_configured": true, 00:15:30.834 "data_offset": 2048, 00:15:30.834 "data_size": 63488 00:15:30.834 }, 00:15:30.834 { 00:15:30.834 "name": "BaseBdev4", 00:15:30.834 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:30.834 "is_configured": true, 00:15:30.834 "data_offset": 2048, 00:15:30.834 "data_size": 63488 00:15:30.834 } 00:15:30.834 ] 00:15:30.834 }' 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.834 [2024-11-05 03:26:44.447334] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:30.834 [2024-11-05 03:26:44.447413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.834 [2024-11-05 03:26:44.447442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:30.834 [2024-11-05 03:26:44.447460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.834 [2024-11-05 03:26:44.448003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.834 [2024-11-05 03:26:44.448057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:30.834 [2024-11-05 03:26:44.448153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:30.834 [2024-11-05 03:26:44.448182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:30.834 [2024-11-05 03:26:44.448194] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:30.834 [2024-11-05 03:26:44.448225] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:30.834 BaseBdev1 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.834 03:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.213 03:26:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.213 "name": "raid_bdev1", 00:15:32.213 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:32.213 "strip_size_kb": 0, 00:15:32.213 "state": "online", 00:15:32.213 "raid_level": "raid1", 00:15:32.213 "superblock": true, 00:15:32.213 "num_base_bdevs": 4, 00:15:32.213 "num_base_bdevs_discovered": 2, 00:15:32.213 "num_base_bdevs_operational": 2, 00:15:32.213 "base_bdevs_list": [ 00:15:32.213 { 00:15:32.213 "name": null, 00:15:32.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.213 "is_configured": false, 00:15:32.213 "data_offset": 0, 00:15:32.213 "data_size": 63488 00:15:32.213 }, 00:15:32.213 { 00:15:32.213 "name": null, 00:15:32.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.213 
"is_configured": false, 00:15:32.213 "data_offset": 2048, 00:15:32.213 "data_size": 63488 00:15:32.213 }, 00:15:32.213 { 00:15:32.213 "name": "BaseBdev3", 00:15:32.213 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:32.213 "is_configured": true, 00:15:32.213 "data_offset": 2048, 00:15:32.213 "data_size": 63488 00:15:32.213 }, 00:15:32.213 { 00:15:32.213 "name": "BaseBdev4", 00:15:32.213 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:32.213 "is_configured": true, 00:15:32.213 "data_offset": 2048, 00:15:32.213 "data_size": 63488 00:15:32.213 } 00:15:32.213 ] 00:15:32.213 }' 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.213 03:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:32.472 "name": "raid_bdev1", 00:15:32.472 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:32.472 "strip_size_kb": 0, 00:15:32.472 "state": "online", 00:15:32.472 "raid_level": "raid1", 00:15:32.472 "superblock": true, 00:15:32.472 "num_base_bdevs": 4, 00:15:32.472 "num_base_bdevs_discovered": 2, 00:15:32.472 "num_base_bdevs_operational": 2, 00:15:32.472 "base_bdevs_list": [ 00:15:32.472 { 00:15:32.472 "name": null, 00:15:32.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.472 "is_configured": false, 00:15:32.472 "data_offset": 0, 00:15:32.472 "data_size": 63488 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "name": null, 00:15:32.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.472 "is_configured": false, 00:15:32.472 "data_offset": 2048, 00:15:32.472 "data_size": 63488 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "name": "BaseBdev3", 00:15:32.472 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:32.472 "is_configured": true, 00:15:32.472 "data_offset": 2048, 00:15:32.472 "data_size": 63488 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "name": "BaseBdev4", 00:15:32.472 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:32.472 "is_configured": true, 00:15:32.472 "data_offset": 2048, 00:15:32.472 "data_size": 63488 00:15:32.472 } 00:15:32.472 ] 00:15:32.472 }' 00:15:32.472 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.731 [2024-11-05 03:26:46.187932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.731 [2024-11-05 03:26:46.188170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:32.731 [2024-11-05 03:26:46.188208] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:32.731 request: 00:15:32.731 { 00:15:32.731 "base_bdev": "BaseBdev1", 00:15:32.731 "raid_bdev": "raid_bdev1", 00:15:32.731 "method": "bdev_raid_add_base_bdev", 00:15:32.731 "req_id": 1 00:15:32.731 } 00:15:32.731 Got JSON-RPC error response 00:15:32.731 response: 00:15:32.731 { 00:15:32.731 "code": -22, 00:15:32.731 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:32.731 } 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.731 03:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.668 "name": "raid_bdev1", 00:15:33.668 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:33.668 "strip_size_kb": 0, 00:15:33.668 "state": "online", 00:15:33.668 "raid_level": "raid1", 00:15:33.668 "superblock": true, 00:15:33.668 "num_base_bdevs": 4, 00:15:33.668 "num_base_bdevs_discovered": 2, 00:15:33.668 "num_base_bdevs_operational": 2, 00:15:33.668 "base_bdevs_list": [ 00:15:33.668 { 00:15:33.668 "name": null, 00:15:33.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.668 "is_configured": false, 00:15:33.668 "data_offset": 0, 00:15:33.668 "data_size": 63488 00:15:33.668 }, 00:15:33.668 { 00:15:33.668 "name": null, 00:15:33.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.668 "is_configured": false, 00:15:33.668 "data_offset": 2048, 00:15:33.668 "data_size": 63488 00:15:33.668 }, 00:15:33.668 { 00:15:33.668 "name": "BaseBdev3", 00:15:33.668 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:33.668 "is_configured": true, 00:15:33.668 "data_offset": 2048, 00:15:33.668 "data_size": 63488 00:15:33.668 }, 00:15:33.668 { 00:15:33.668 "name": "BaseBdev4", 00:15:33.668 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:33.668 "is_configured": true, 00:15:33.668 "data_offset": 2048, 00:15:33.668 "data_size": 63488 00:15:33.668 } 00:15:33.668 ] 00:15:33.668 }' 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.668 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.237 03:26:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.237 "name": "raid_bdev1", 00:15:34.237 "uuid": "392e7de0-d5f3-4cab-937b-93e5e87a8ef1", 00:15:34.237 "strip_size_kb": 0, 00:15:34.237 "state": "online", 00:15:34.237 "raid_level": "raid1", 00:15:34.237 "superblock": true, 00:15:34.237 "num_base_bdevs": 4, 00:15:34.237 "num_base_bdevs_discovered": 2, 00:15:34.237 "num_base_bdevs_operational": 2, 00:15:34.237 "base_bdevs_list": [ 00:15:34.237 { 00:15:34.237 "name": null, 00:15:34.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.237 "is_configured": false, 00:15:34.237 "data_offset": 0, 00:15:34.237 "data_size": 63488 00:15:34.237 }, 00:15:34.237 { 00:15:34.237 "name": null, 00:15:34.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.237 "is_configured": false, 00:15:34.237 "data_offset": 2048, 00:15:34.237 "data_size": 63488 00:15:34.237 }, 00:15:34.237 { 00:15:34.237 "name": "BaseBdev3", 00:15:34.237 "uuid": "54ecdca2-ddfd-5b0e-8915-e01acd216dc6", 00:15:34.237 "is_configured": true, 00:15:34.237 "data_offset": 2048, 00:15:34.237 "data_size": 63488 00:15:34.237 }, 
00:15:34.237 { 00:15:34.237 "name": "BaseBdev4", 00:15:34.237 "uuid": "3b7005e8-6789-5c15-9155-3bc13efa557d", 00:15:34.237 "is_configured": true, 00:15:34.237 "data_offset": 2048, 00:15:34.237 "data_size": 63488 00:15:34.237 } 00:15:34.237 ] 00:15:34.237 }' 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78019 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78019 ']' 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78019 00:15:34.237 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78019 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.496 killing process with pid 78019 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78019' 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78019 00:15:34.496 Received shutdown signal, test time was about 60.000000 seconds 00:15:34.496 00:15:34.496 Latency(us) 00:15:34.496 
[2024-11-05T03:26:48.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.496 [2024-11-05T03:26:48.135Z] =================================================================================================================== 00:15:34.496 [2024-11-05T03:26:48.135Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:34.496 [2024-11-05 03:26:47.900641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.496 03:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78019 00:15:34.496 [2024-11-05 03:26:47.900782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.496 [2024-11-05 03:26:47.900870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.496 [2024-11-05 03:26:47.900900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:34.755 [2024-11-05 03:26:48.290761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:35.692 00:15:35.692 real 0m28.627s 00:15:35.692 user 0m35.252s 00:15:35.692 sys 0m4.016s 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.692 ************************************ 00:15:35.692 END TEST raid_rebuild_test_sb 00:15:35.692 ************************************ 00:15:35.692 03:26:49 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:35.692 03:26:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:35.692 03:26:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.692 03:26:49 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:15:35.692 ************************************ 00:15:35.692 START TEST raid_rebuild_test_io 00:15:35.692 ************************************ 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.692 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78808 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78808 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78808 ']' 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.693 03:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.952 [2024-11-05 03:26:49.386291] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:15:35.952 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:35.952 Zero copy mechanism will not be used. 00:15:35.952 [2024-11-05 03:26:49.386490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78808 ] 00:15:35.952 [2024-11-05 03:26:49.571655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.218 [2024-11-05 03:26:49.686918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.478 [2024-11-05 03:26:49.865634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.478 [2024-11-05 03:26:49.865719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.737 BaseBdev1_malloc 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.737 [2024-11-05 03:26:50.346248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:36.737 [2024-11-05 03:26:50.346364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.737 [2024-11-05 03:26:50.346396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:36.737 [2024-11-05 03:26:50.346415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.737 [2024-11-05 03:26:50.349205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.737 [2024-11-05 03:26:50.349273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:36.737 BaseBdev1 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.737 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:36.997 BaseBdev2_malloc 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 [2024-11-05 03:26:50.395799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:36.997 [2024-11-05 03:26:50.395915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.997 [2024-11-05 03:26:50.395940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:36.997 [2024-11-05 03:26:50.395958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.997 [2024-11-05 03:26:50.398913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.997 [2024-11-05 03:26:50.398972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:36.997 BaseBdev2 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 BaseBdev3_malloc 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.997 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 [2024-11-05 03:26:50.454843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:36.998 [2024-11-05 03:26:50.454945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.998 [2024-11-05 03:26:50.454976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:36.998 [2024-11-05 03:26:50.454993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.998 [2024-11-05 03:26:50.457831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.998 [2024-11-05 03:26:50.457915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:36.998 BaseBdev3 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 BaseBdev4_malloc 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 [2024-11-05 03:26:50.507372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:36.998 [2024-11-05 03:26:50.507460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.998 [2024-11-05 03:26:50.507489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:36.998 [2024-11-05 03:26:50.507506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.998 [2024-11-05 03:26:50.510404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.998 [2024-11-05 03:26:50.510484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:36.998 BaseBdev4 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 spare_malloc 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 spare_delay 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 [2024-11-05 03:26:50.566725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.998 [2024-11-05 03:26:50.566815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.998 [2024-11-05 03:26:50.566844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:36.998 [2024-11-05 03:26:50.566862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.998 [2024-11-05 03:26:50.569819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.998 [2024-11-05 03:26:50.569872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.998 spare 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 [2024-11-05 03:26:50.578892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.998 [2024-11-05 03:26:50.581544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.998 [2024-11-05 03:26:50.581659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.998 [2024-11-05 03:26:50.581768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:36.998 [2024-11-05 03:26:50.581879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:36.998 [2024-11-05 03:26:50.581902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:36.998 [2024-11-05 03:26:50.582275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:36.998 [2024-11-05 03:26:50.582557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:36.998 [2024-11-05 03:26:50.582577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:36.998 [2024-11-05 03:26:50.582815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.998 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.257 03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.257 "name": "raid_bdev1", 00:15:37.257 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16", 00:15:37.257 "strip_size_kb": 0, 00:15:37.257 "state": "online", 00:15:37.257 "raid_level": "raid1", 00:15:37.257 "superblock": false, 00:15:37.257 "num_base_bdevs": 4, 00:15:37.257 "num_base_bdevs_discovered": 4, 00:15:37.257 "num_base_bdevs_operational": 4, 00:15:37.257 "base_bdevs_list": [ 00:15:37.257 { 00:15:37.257 "name": "BaseBdev1", 00:15:37.257 "uuid": "a33892a1-e6a5-56b1-81c1-389d47eac133", 00:15:37.257 "is_configured": true, 00:15:37.257 "data_offset": 0, 00:15:37.257 "data_size": 65536 00:15:37.257 }, 00:15:37.257 { 00:15:37.257 "name": "BaseBdev2", 00:15:37.257 "uuid": "c51dd3da-7db0-5bde-a4de-6d413c02b11c", 00:15:37.257 "is_configured": true, 00:15:37.257 "data_offset": 0, 00:15:37.257 "data_size": 65536 00:15:37.257 }, 00:15:37.257 { 00:15:37.257 "name": "BaseBdev3", 00:15:37.257 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40", 00:15:37.257 "is_configured": true, 00:15:37.257 "data_offset": 0, 00:15:37.257 "data_size": 65536 00:15:37.257 }, 00:15:37.257 { 00:15:37.257 "name": "BaseBdev4", 00:15:37.257 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec", 00:15:37.257 "is_configured": true, 00:15:37.258 "data_offset": 0, 00:15:37.258 "data_size": 65536 00:15:37.258 } 00:15:37.258 ] 00:15:37.258 }' 00:15:37.258 
03:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.258 03:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.516 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.516 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.516 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.516 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:37.516 [2024-11-05 03:26:51.115425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.516 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:37.775 03:26:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.775 [2024-11-05 03:26:51.218986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.775 "name": "raid_bdev1", 00:15:37.775 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16", 00:15:37.775 "strip_size_kb": 0, 00:15:37.775 "state": "online", 00:15:37.775 "raid_level": "raid1", 00:15:37.775 "superblock": false, 00:15:37.775 "num_base_bdevs": 4, 00:15:37.775 "num_base_bdevs_discovered": 3, 00:15:37.775 "num_base_bdevs_operational": 3, 00:15:37.775 "base_bdevs_list": [ 00:15:37.775 { 00:15:37.775 "name": null, 00:15:37.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.775 "is_configured": false, 00:15:37.775 "data_offset": 0, 00:15:37.775 "data_size": 65536 00:15:37.775 }, 00:15:37.775 { 00:15:37.775 "name": "BaseBdev2", 00:15:37.775 "uuid": "c51dd3da-7db0-5bde-a4de-6d413c02b11c", 00:15:37.775 "is_configured": true, 00:15:37.775 "data_offset": 0, 00:15:37.775 "data_size": 65536 00:15:37.775 }, 00:15:37.775 { 00:15:37.775 "name": "BaseBdev3", 00:15:37.775 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40", 00:15:37.775 "is_configured": true, 00:15:37.775 "data_offset": 0, 00:15:37.775 "data_size": 65536 00:15:37.775 }, 00:15:37.775 { 00:15:37.775 "name": "BaseBdev4", 00:15:37.775 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec", 00:15:37.775 "is_configured": true, 00:15:37.775 "data_offset": 0, 00:15:37.775 "data_size": 65536 00:15:37.775 } 00:15:37.775 ] 00:15:37.775 }' 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.775 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.775 [2024-11-05 03:26:51.347090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:37.775 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:37.775 Zero copy mechanism will not be used. 00:15:37.775 Running I/O for 60 seconds... 
00:15:38.342 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:38.342 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:38.342 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:38.342 [2024-11-05 03:26:51.778912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:38.342 03:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:38.342 03:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:38.342 [2024-11-05 03:26:51.876662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:15:38.342 [2024-11-05 03:26:51.879259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:38.600 [2024-11-05 03:26:51.995281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:38.600 [2024-11-05 03:26:51.996960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:38.600 [2024-11-05 03:26:52.226389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:38.600 [2024-11-05 03:26:52.227384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:39.195 138.00 IOPS, 414.00 MiB/s [2024-11-05T03:26:52.835Z] [2024-11-05 03:26:52.737129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:39.467 "name": "raid_bdev1",
00:15:39.467 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:39.467 "strip_size_kb": 0,
00:15:39.467 "state": "online",
00:15:39.467 "raid_level": "raid1",
00:15:39.467 "superblock": false,
00:15:39.467 "num_base_bdevs": 4,
00:15:39.467 "num_base_bdevs_discovered": 4,
00:15:39.467 "num_base_bdevs_operational": 4,
00:15:39.467 "process": {
00:15:39.467 "type": "rebuild",
00:15:39.467 "target": "spare",
00:15:39.467 "progress": {
00:15:39.467 "blocks": 10240,
00:15:39.467 "percent": 15
00:15:39.467 }
00:15:39.467 },
00:15:39.467 "base_bdevs_list": [
00:15:39.467 {
00:15:39.467 "name": "spare",
00:15:39.467 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:39.467 "is_configured": true,
00:15:39.467 "data_offset": 0,
00:15:39.467 "data_size": 65536
00:15:39.467 },
00:15:39.467 {
00:15:39.467 "name": "BaseBdev2",
00:15:39.467 "uuid": "c51dd3da-7db0-5bde-a4de-6d413c02b11c",
00:15:39.467 "is_configured": true,
00:15:39.467 "data_offset": 0,
00:15:39.467 "data_size": 65536
00:15:39.467 },
00:15:39.467 {
00:15:39.467 "name": "BaseBdev3",
00:15:39.467 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:39.467 "is_configured": true,
00:15:39.467 "data_offset": 0,
00:15:39.467 "data_size": 65536
00:15:39.467 },
00:15:39.467 {
00:15:39.467 "name": "BaseBdev4",
00:15:39.467 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:39.467 "is_configured": true,
00:15:39.467 "data_offset": 0,
00:15:39.467 "data_size": 65536
00:15:39.467 }
00:15:39.467 ]
00:15:39.467 }'
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:39.467 03:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:39.467 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:39.467 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:39.467 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:39.468 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:39.468 [2024-11-05 03:26:53.010381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:39.468 [2024-11-05 03:26:53.070073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:15:39.468 [2024-11-05 03:26:53.071849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:15:39.727 [2024-11-05 03:26:53.181842] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:39.727 [2024-11-05 03:26:53.195579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:39.727 [2024-11-05 03:26:53.195657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:39.727 [2024-11-05 03:26:53.195673] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:39.727 [2024-11-05 03:26:53.235935] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:39.727 "name": "raid_bdev1",
00:15:39.727 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:39.727 "strip_size_kb": 0,
00:15:39.727 "state": "online",
00:15:39.727 "raid_level": "raid1",
00:15:39.727 "superblock": false,
00:15:39.727 "num_base_bdevs": 4,
00:15:39.727 "num_base_bdevs_discovered": 3,
00:15:39.727 "num_base_bdevs_operational": 3,
00:15:39.727 "base_bdevs_list": [
00:15:39.727 {
00:15:39.727 "name": null,
00:15:39.727 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:39.727 "is_configured": false,
00:15:39.727 "data_offset": 0,
00:15:39.727 "data_size": 65536
00:15:39.727 },
00:15:39.727 {
00:15:39.727 "name": "BaseBdev2",
00:15:39.727 "uuid": "c51dd3da-7db0-5bde-a4de-6d413c02b11c",
00:15:39.727 "is_configured": true,
00:15:39.727 "data_offset": 0,
00:15:39.727 "data_size": 65536
00:15:39.727 },
00:15:39.727 {
00:15:39.727 "name": "BaseBdev3",
00:15:39.727 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:39.727 "is_configured": true,
00:15:39.727 "data_offset": 0,
00:15:39.727 "data_size": 65536
00:15:39.727 },
00:15:39.727 {
00:15:39.727 "name": "BaseBdev4",
00:15:39.727 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:39.727 "is_configured": true,
00:15:39.727 "data_offset": 0,
00:15:39.727 "data_size": 65536
00:15:39.727 }
00:15:39.727 ]
00:15:39.727 }'
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:39.727 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:40.246 103.50 IOPS, 310.50 MiB/s [2024-11-05T03:26:53.885Z] 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:40.246 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:40.246 "name": "raid_bdev1",
00:15:40.246 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:40.246 "strip_size_kb": 0,
00:15:40.246 "state": "online",
00:15:40.246 "raid_level": "raid1",
00:15:40.246 "superblock": false,
00:15:40.246 "num_base_bdevs": 4,
00:15:40.246 "num_base_bdevs_discovered": 3,
00:15:40.246 "num_base_bdevs_operational": 3,
00:15:40.246 "base_bdevs_list": [
00:15:40.246 {
00:15:40.246 "name": null,
00:15:40.246 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:40.246 "is_configured": false,
00:15:40.246 "data_offset": 0,
00:15:40.246 "data_size": 65536
00:15:40.246 },
00:15:40.246 {
00:15:40.246 "name": "BaseBdev2",
00:15:40.246 "uuid": "c51dd3da-7db0-5bde-a4de-6d413c02b11c",
00:15:40.246 "is_configured": true,
00:15:40.246 "data_offset": 0,
00:15:40.246 "data_size": 65536
00:15:40.246 },
00:15:40.246 {
00:15:40.246 "name": "BaseBdev3",
00:15:40.246 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:40.246 "is_configured": true,
00:15:40.246 "data_offset": 0,
00:15:40.246 "data_size": 65536
00:15:40.246 },
00:15:40.246 {
00:15:40.246 "name": "BaseBdev4",
00:15:40.246 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:40.246 "is_configured": true,
00:15:40.246 "data_offset": 0,
00:15:40.246 "data_size": 65536
00:15:40.246 }
00:15:40.246 ]
00:15:40.246 }'
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:40.505 03:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:40.505 [2024-11-05 03:26:53.956074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:40.505 03:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:40.505 03:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:40.505 [2024-11-05 03:26:54.059153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:15:40.505 [2024-11-05 03:26:54.062109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:40.764 [2024-11-05 03:26:54.172591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:40.764 [2024-11-05 03:26:54.173298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:40.764 118.67 IOPS, 356.00 MiB/s [2024-11-05T03:26:54.403Z] [2024-11-05 03:26:54.392546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:40.764 [2024-11-05 03:26:54.392994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:41.332 [2024-11-05 03:26:54.729665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:15:41.332 [2024-11-05 03:26:54.843763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.591 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:41.591 "name": "raid_bdev1",
00:15:41.591 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:41.591 "strip_size_kb": 0,
00:15:41.591 "state": "online",
00:15:41.591 "raid_level": "raid1",
00:15:41.591 "superblock": false,
00:15:41.591 "num_base_bdevs": 4,
00:15:41.591 "num_base_bdevs_discovered": 4,
00:15:41.591 "num_base_bdevs_operational": 4,
00:15:41.591 "process": {
00:15:41.591 "type": "rebuild",
00:15:41.591 "target": "spare",
00:15:41.591 "progress": {
00:15:41.591 "blocks": 12288,
00:15:41.591 "percent": 18
00:15:41.591 }
00:15:41.591 },
00:15:41.591 "base_bdevs_list": [
00:15:41.591 {
00:15:41.591 "name": "spare",
00:15:41.591 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:41.592 "is_configured": true,
00:15:41.592 "data_offset": 0,
00:15:41.592 "data_size": 65536
00:15:41.592 },
00:15:41.592 {
00:15:41.592 "name": "BaseBdev2",
00:15:41.592 "uuid": "c51dd3da-7db0-5bde-a4de-6d413c02b11c",
00:15:41.592 "is_configured": true,
00:15:41.592 "data_offset": 0,
00:15:41.592 "data_size": 65536
00:15:41.592 },
00:15:41.592 {
00:15:41.592 "name": "BaseBdev3",
00:15:41.592 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:41.592 "is_configured": true,
00:15:41.592 "data_offset": 0,
00:15:41.592 "data_size": 65536
00:15:41.592 },
00:15:41.592 {
00:15:41.592 "name": "BaseBdev4",
00:15:41.592 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:41.592 "is_configured": true,
00:15:41.592 "data_offset": 0,
00:15:41.592 "data_size": 65536
00:15:41.592 }
00:15:41.592 ]
00:15:41.592 }'
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:41.592 [2024-11-05 03:26:55.088444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.592 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:41.592 [2024-11-05 03:26:55.172188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:41.851 [2024-11-05 03:26:55.314680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:15:41.851 105.50 IOPS, 316.50 MiB/s [2024-11-05T03:26:55.490Z] [2024-11-05 03:26:55.418087] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:15:41.851 [2024-11-05 03:26:55.418159] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0
00:15:41.851 [2024-11-05 03:26:55.426685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.851 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:41.851 "name": "raid_bdev1",
00:15:41.851 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:41.851 "strip_size_kb": 0,
00:15:41.851 "state": "online",
00:15:41.851 "raid_level": "raid1",
00:15:41.851 "superblock": false,
00:15:41.851 "num_base_bdevs": 4,
00:15:41.851 "num_base_bdevs_discovered": 3,
00:15:41.851 "num_base_bdevs_operational": 3,
00:15:41.851 "process": {
00:15:41.851 "type": "rebuild",
00:15:41.851 "target": "spare",
00:15:41.851 "progress": {
00:15:41.851 "blocks": 16384,
00:15:41.851 "percent": 25
00:15:41.851 }
00:15:41.851 },
00:15:41.851 "base_bdevs_list": [
00:15:41.851 {
00:15:41.851 "name": "spare",
00:15:41.851 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:41.851 "is_configured": true,
00:15:41.851 "data_offset": 0,
00:15:41.851 "data_size": 65536
00:15:41.851 },
00:15:41.851 {
00:15:41.851 "name": null,
00:15:41.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:41.851 "is_configured": false,
00:15:41.851 "data_offset": 0,
00:15:41.851 "data_size": 65536
00:15:41.851 },
00:15:41.851 {
00:15:41.851 "name": "BaseBdev3",
00:15:41.851 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:41.851 "is_configured": true,
00:15:41.851 "data_offset": 0,
00:15:41.851 "data_size": 65536
00:15:41.851 },
00:15:41.851 {
00:15:41.851 "name": "BaseBdev4",
00:15:41.851 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:41.851 "is_configured": true,
00:15:41.851 "data_offset": 0,
00:15:41.851 "data_size": 65536
00:15:41.851 }
00:15:41.851 ]
00:15:41.851 }'
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=517
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.110 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:42.110 "name": "raid_bdev1",
00:15:42.110 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:42.111 "strip_size_kb": 0,
00:15:42.111 "state": "online",
00:15:42.111 "raid_level": "raid1",
00:15:42.111 "superblock": false,
00:15:42.111 "num_base_bdevs": 4,
00:15:42.111 "num_base_bdevs_discovered": 3,
00:15:42.111 "num_base_bdevs_operational": 3,
00:15:42.111 "process": {
00:15:42.111 "type": "rebuild",
00:15:42.111 "target": "spare",
00:15:42.111 "progress": {
00:15:42.111 "blocks": 16384,
00:15:42.111 "percent": 25
00:15:42.111 }
00:15:42.111 },
00:15:42.111 "base_bdevs_list": [
00:15:42.111 {
00:15:42.111 "name": "spare",
00:15:42.111 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:42.111 "is_configured": true,
00:15:42.111 "data_offset": 0,
00:15:42.111 "data_size": 65536
00:15:42.111 },
00:15:42.111 {
00:15:42.111 "name": null,
00:15:42.111 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:42.111 "is_configured": false,
00:15:42.111 "data_offset": 0,
00:15:42.111 "data_size": 65536
00:15:42.111 },
00:15:42.111 {
00:15:42.111 "name": "BaseBdev3",
00:15:42.111 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:42.111 "is_configured": true,
00:15:42.111 "data_offset": 0,
00:15:42.111 "data_size": 65536
00:15:42.111 },
00:15:42.111 {
00:15:42.111 "name": "BaseBdev4",
00:15:42.111 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:42.111 "is_configured": true,
00:15:42.111 "data_offset": 0,
00:15:42.111 "data_size": 65536
00:15:42.111 }
00:15:42.111 ]
00:15:42.111 }'
00:15:42.111 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:42.111 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:42.111 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:42.369 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:42.369 03:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:42.369 [2024-11-05 03:26:55.819905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:15:42.628 [2024-11-05 03:26:56.050967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:15:42.628 [2024-11-05 03:26:56.051707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:15:42.887 93.00 IOPS, 279.00 MiB/s [2024-11-05T03:26:56.526Z] [2024-11-05 03:26:56.400903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:15:42.887 [2024-11-05 03:26:56.523923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:15:43.145 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:43.145 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:43.145 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:43.145 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:43.145 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:43.146 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:43.146 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.146 03:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.146 03:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:43.146 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:43.404 "name": "raid_bdev1",
00:15:43.404 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:43.404 "strip_size_kb": 0,
00:15:43.404 "state": "online",
00:15:43.404 "raid_level": "raid1",
00:15:43.404 "superblock": false,
00:15:43.404 "num_base_bdevs": 4,
00:15:43.404 "num_base_bdevs_discovered": 3,
00:15:43.404 "num_base_bdevs_operational": 3,
00:15:43.404 "process": {
00:15:43.404 "type": "rebuild",
00:15:43.404 "target": "spare",
00:15:43.404 "progress": {
00:15:43.404 "blocks": 28672,
00:15:43.404 "percent": 43
00:15:43.404 }
00:15:43.404 },
00:15:43.404 "base_bdevs_list": [
00:15:43.404 {
00:15:43.404 "name": "spare",
00:15:43.404 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:43.404 "is_configured": true,
00:15:43.404 "data_offset": 0,
00:15:43.404 "data_size": 65536
00:15:43.404 },
00:15:43.404 {
00:15:43.404 "name": null,
00:15:43.404 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:43.404 "is_configured": false,
00:15:43.404 "data_offset": 0,
00:15:43.404 "data_size": 65536
00:15:43.404 },
00:15:43.404 {
00:15:43.404 "name": "BaseBdev3",
00:15:43.404 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:43.404 "is_configured": true,
00:15:43.404 "data_offset": 0,
00:15:43.404 "data_size": 65536
00:15:43.404 },
00:15:43.404 {
00:15:43.404 "name": "BaseBdev4",
00:15:43.404 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:43.404 "is_configured": true,
00:15:43.404 "data_offset": 0,
00:15:43.404 "data_size": 65536
00:15:43.404 }
00:15:43.404 ]
00:15:43.404 }'
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:43.404 03:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:43.404 [2024-11-05 03:26:57.014568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:15:43.972 87.33 IOPS, 262.00 MiB/s [2024-11-05T03:26:57.611Z] [2024-11-05 03:26:57.402779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:44.567 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.568 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.568 03:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.568 03:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:44.568 03:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.568 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:44.568 "name": "raid_bdev1",
00:15:44.568 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:44.568 "strip_size_kb": 0,
00:15:44.568 "state": "online",
00:15:44.568 "raid_level": "raid1",
00:15:44.568 "superblock": false,
00:15:44.568 "num_base_bdevs": 4,
00:15:44.568 "num_base_bdevs_discovered": 3,
00:15:44.568 "num_base_bdevs_operational": 3,
00:15:44.568 "process": {
00:15:44.568 "type": "rebuild",
00:15:44.568 "target": "spare",
00:15:44.568 "progress": {
00:15:44.568 "blocks": 49152,
00:15:44.568 "percent": 75
00:15:44.568 }
00:15:44.568 },
00:15:44.568 "base_bdevs_list": [
00:15:44.568 {
00:15:44.568 "name": "spare",
00:15:44.568 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:44.568 "is_configured": true,
00:15:44.568 "data_offset": 0,
00:15:44.568 "data_size": 65536
00:15:44.568 },
00:15:44.568 {
00:15:44.568 "name": null,
00:15:44.568 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.568 "is_configured": false,
00:15:44.568 "data_offset": 0,
00:15:44.568 "data_size": 65536
00:15:44.568 },
00:15:44.568 {
00:15:44.568 "name": "BaseBdev3",
00:15:44.568 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:44.568 "is_configured": true,
00:15:44.568 "data_offset": 0,
00:15:44.568 "data_size": 65536
00:15:44.568 },
00:15:44.568 {
00:15:44.568 "name": "BaseBdev4",
00:15:44.568 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:44.568 "is_configured": true,
00:15:44.568 "data_offset": 0,
00:15:44.568 "data_size": 65536
00:15:44.568 }
00:15:44.568 ]
00:15:44.568 }'
00:15:44.568 03:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:44.568 03:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:44.568 03:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:44.568 03:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:44.568 03:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:44.827 80.43 IOPS, 241.29 MiB/s [2024-11-05T03:26:58.466Z] [2024-11-05 03:26:58.434551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:15:45.394 [2024-11-05 03:26:58.890242] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:15:45.394 [2024-11-05 03:26:58.997462] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:15:45.394 [2024-11-05 03:26:59.001153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.654 "name": "raid_bdev1",
00:15:45.654 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:45.654 "strip_size_kb": 0,
00:15:45.654 "state": "online",
00:15:45.654 "raid_level": "raid1",
00:15:45.654 "superblock": false,
00:15:45.654 "num_base_bdevs": 4,
00:15:45.654 "num_base_bdevs_discovered": 3,
00:15:45.654 "num_base_bdevs_operational": 3,
00:15:45.654 "base_bdevs_list": [
00:15:45.654 {
00:15:45.654 "name": "spare",
00:15:45.654 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:45.654 "is_configured": true,
00:15:45.654 "data_offset": 0,
00:15:45.654 "data_size": 65536
00:15:45.654 },
00:15:45.654 {
00:15:45.654 "name": null,
00:15:45.654 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.654 "is_configured": false,
00:15:45.654 "data_offset": 0,
00:15:45.654 "data_size": 65536
00:15:45.654 },
00:15:45.654 {
00:15:45.654 "name": "BaseBdev3",
00:15:45.654 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:45.654 "is_configured": true,
00:15:45.654 "data_offset": 0,
00:15:45.654 "data_size": 65536
00:15:45.654 },
00:15:45.654 {
00:15:45.654 "name": "BaseBdev4",
00:15:45.654 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:45.654 "is_configured": true,
00:15:45.654 "data_offset": 0,
00:15:45.654 "data_size": 65536
00:15:45.654 }
00:15:45.654 ]
00:15:45.654 }'
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.654 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:45.913 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.913 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.913 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.913 "name": "raid_bdev1",
00:15:45.913 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16",
00:15:45.913 "strip_size_kb": 0,
00:15:45.913 "state": "online",
00:15:45.913 "raid_level": "raid1",
00:15:45.913 "superblock": false,
00:15:45.913 "num_base_bdevs": 4,
00:15:45.913 "num_base_bdevs_discovered": 3,
00:15:45.913 "num_base_bdevs_operational": 3,
00:15:45.913 "base_bdevs_list": [
00:15:45.913 {
00:15:45.913 "name": "spare",
00:15:45.913 "uuid": "038667e7-4763-5245-9a42-fe26405e698e",
00:15:45.913 "is_configured": true,
00:15:45.913 "data_offset": 0,
00:15:45.913 "data_size": 65536
00:15:45.913 },
00:15:45.913 {
00:15:45.913 "name": null,
00:15:45.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.913 "is_configured": false,
00:15:45.913 "data_offset": 0,
00:15:45.913 "data_size": 65536
00:15:45.913 },
00:15:45.913 {
00:15:45.913 "name": "BaseBdev3",
00:15:45.913 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40",
00:15:45.913 "is_configured": true,
00:15:45.913 "data_offset": 0,
00:15:45.913 "data_size": 65536
00:15:45.913 },
00:15:45.913 {
00:15:45.914 "name": "BaseBdev4",
00:15:45.914 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec",
00:15:45.914 "is_configured": true,
00:15:45.914 "data_offset": 0,
00:15:45.914 "data_size": 65536
00:15:45.914 }
00:15:45.914 ]
00:15:45.914 }'
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.914 75.88 IOPS, 227.62 MiB/s [2024-11-05T03:26:59.553Z] 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:45.914 03:26:59
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.914 "name": "raid_bdev1", 00:15:45.914 "uuid": "84c40923-a05d-41c2-bed9-672fa0fb3e16", 00:15:45.914 "strip_size_kb": 0, 00:15:45.914 "state": "online", 00:15:45.914 "raid_level": "raid1", 00:15:45.914 "superblock": false, 00:15:45.914 "num_base_bdevs": 4, 00:15:45.914 "num_base_bdevs_discovered": 3, 00:15:45.914 "num_base_bdevs_operational": 3, 00:15:45.914 "base_bdevs_list": [ 00:15:45.914 { 00:15:45.914 "name": "spare", 00:15:45.914 "uuid": "038667e7-4763-5245-9a42-fe26405e698e", 00:15:45.914 "is_configured": true, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 }, 00:15:45.914 { 00:15:45.914 "name": null, 00:15:45.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.914 
"is_configured": false, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 }, 00:15:45.914 { 00:15:45.914 "name": "BaseBdev3", 00:15:45.914 "uuid": "dad725f1-8315-5528-b7be-301c3b122d40", 00:15:45.914 "is_configured": true, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 }, 00:15:45.914 { 00:15:45.914 "name": "BaseBdev4", 00:15:45.914 "uuid": "938b9227-cf77-57d7-82d8-fabcf6e978ec", 00:15:45.914 "is_configured": true, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 } 00:15:45.914 ] 00:15:45.914 }' 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.914 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.484 03:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.484 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.484 03:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.484 [2024-11-05 03:26:59.962414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.484 [2024-11-05 03:26:59.962461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.484 00:15:46.484 Latency(us) 00:15:46.484 [2024-11-05T03:27:00.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.484 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:46.484 raid_bdev1 : 8.65 72.85 218.54 0.00 0.00 18696.33 260.65 120109.61 00:15:46.484 [2024-11-05T03:27:00.123Z] =================================================================================================================== 00:15:46.484 [2024-11-05T03:27:00.123Z] Total : 72.85 218.54 0.00 0.00 18696.33 260.65 120109.61 00:15:46.484 [2024-11-05 03:27:00.017177] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.484 [2024-11-05 03:27:00.017278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.484 { 00:15:46.484 "results": [ 00:15:46.484 { 00:15:46.484 "job": "raid_bdev1", 00:15:46.484 "core_mask": "0x1", 00:15:46.484 "workload": "randrw", 00:15:46.484 "percentage": 50, 00:15:46.484 "status": "finished", 00:15:46.484 "queue_depth": 2, 00:15:46.484 "io_size": 3145728, 00:15:46.484 "runtime": 8.64845, 00:15:46.484 "iops": 72.8454231683134, 00:15:46.484 "mibps": 218.5362695049402, 00:15:46.484 "io_failed": 0, 00:15:46.484 "io_timeout": 0, 00:15:46.484 "avg_latency_us": 18696.326464646467, 00:15:46.484 "min_latency_us": 260.6545454545454, 00:15:46.484 "max_latency_us": 120109.61454545455 00:15:46.484 } 00:15:46.484 ], 00:15:46.484 "core_count": 1 00:15:46.484 } 00:15:46.484 [2024-11-05 03:27:00.017429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.484 [2024-11-05 03:27:00.017451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:46.484 
03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.484 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:46.743 /dev/nbd0 00:15:46.743 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:47.002 03:27:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.002 1+0 records in 00:15:47.002 1+0 records out 00:15:47.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345567 s, 11.9 MB/s 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- 
# nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.002 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:47.261 /dev/nbd1 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 
-- # (( i = 1 )) 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.261 1+0 records in 00:15:47.261 1+0 records out 00:15:47.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360472 s, 11.4 MB/s 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.261 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:47.525 03:27:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:47.793 03:27:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.793 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:48.052 /dev/nbd1 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.052 1+0 records in 00:15:48.052 1+0 records out 00:15:48.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374228 s, 10.9 MB/s 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 
00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.052 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:48.620 03:27:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.620 
03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.620 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:48.621 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.621 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:48.621 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.621 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78808 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78808 ']' 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78808 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78808 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:48.880 killing process with pid 78808 00:15:48.880 Received shutdown signal, test time was about 10.958983 seconds 00:15:48.880 00:15:48.880 Latency(us) 00:15:48.880 [2024-11-05T03:27:02.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.880 [2024-11-05T03:27:02.519Z] =================================================================================================================== 00:15:48.880 [2024-11-05T03:27:02.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78808' 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78808 00:15:48.880 [2024-11-05 03:27:02.308773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.880 03:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78808 00:15:49.139 [2024-11-05 03:27:02.673938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.089 03:27:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:15:50.089 00:15:50.089 real 0m14.448s 00:15:50.089 user 0m19.113s 00:15:50.089 sys 0m1.796s 00:15:50.089 03:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:50.089 ************************************ 00:15:50.089 END TEST raid_rebuild_test_io 00:15:50.089 ************************************ 00:15:50.089 03:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.348 03:27:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:50.348 03:27:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:50.348 03:27:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:50.348 03:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.348 ************************************ 00:15:50.348 START TEST raid_rebuild_test_sb_io 00:15:50.348 ************************************ 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79230 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79230 00:15:50.348 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79230 ']' 00:15:50.349 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.349 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:50.349 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:50.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.349 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.349 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:50.349 03:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.349 [2024-11-05 03:27:03.895446] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:15:50.349 [2024-11-05 03:27:03.895637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79230 ]
00:15:50.349 I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:50.349 Zero copy mechanism will not be used.
00:15:50.607 [2024-11-05 03:27:04.081740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:50.607 [2024-11-05 03:27:04.207402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:50.866 [2024-11-05 03:27:04.393098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:50.866 [2024-11-05 03:27:04.393182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.434 BaseBdev1_malloc
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.434 [2024-11-05 03:27:04.952887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:51.434 [2024-11-05 03:27:04.952978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.434 [2024-11-05 03:27:04.953008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:51.434 [2024-11-05 03:27:04.953026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.434 [2024-11-05 03:27:04.956146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.434 [2024-11-05 03:27:04.956212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:51.434 BaseBdev1
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.434 03:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.434 BaseBdev2_malloc
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.434 [2024-11-05 03:27:05.011598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:51.434 [2024-11-05 03:27:05.011673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.434 [2024-11-05 03:27:05.011714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:51.434 [2024-11-05 03:27:05.011733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.434 [2024-11-05 03:27:05.014902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.434 [2024-11-05 03:27:05.014966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:51.434 BaseBdev2
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.434 BaseBdev3_malloc
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.434 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.693 [2024-11-05 03:27:05.075165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:51.693 [2024-11-05 03:27:05.075226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.693 [2024-11-05 03:27:05.075255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:51.693 [2024-11-05 03:27:05.075297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.693 [2024-11-05 03:27:05.078482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.693 [2024-11-05 03:27:05.078537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:51.693 BaseBdev3
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.693 BaseBdev4_malloc
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.693 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.693 [2024-11-05 03:27:05.134231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:15:51.693 [2024-11-05 03:27:05.134383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.693 [2024-11-05 03:27:05.134412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:15:51.693 [2024-11-05 03:27:05.134430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.693 [2024-11-05 03:27:05.137177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.694 [2024-11-05 03:27:05.137253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:15:51.694 BaseBdev4
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.694 spare_malloc
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.694 spare_delay
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.694 [2024-11-05 03:27:05.199902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:51.694 [2024-11-05 03:27:05.200006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.694 [2024-11-05 03:27:05.200035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:51.694 [2024-11-05 03:27:05.200053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.694 [2024-11-05 03:27:05.203076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.694 [2024-11-05 03:27:05.203132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:51.694 spare
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.694 [2024-11-05 03:27:05.211994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:51.694 [2024-11-05 03:27:05.214513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:51.694 [2024-11-05 03:27:05.214650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:51.694 [2024-11-05 03:27:05.214725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:51.694 [2024-11-05 03:27:05.214998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:15:51.694 [2024-11-05 03:27:05.215025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:51.694 [2024-11-05 03:27:05.215421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:51.694 [2024-11-05 03:27:05.215653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:15:51.694 [2024-11-05 03:27:05.215677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:15:51.694 [2024-11-05 03:27:05.215949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.694 "name": "raid_bdev1",
00:15:51.694 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282",
00:15:51.694 "strip_size_kb": 0,
00:15:51.694 "state": "online",
00:15:51.694 "raid_level": "raid1",
00:15:51.694 "superblock": true,
00:15:51.694 "num_base_bdevs": 4,
00:15:51.694 "num_base_bdevs_discovered": 4,
00:15:51.694 "num_base_bdevs_operational": 4,
00:15:51.694 "base_bdevs_list": [
00:15:51.694 {
00:15:51.694 "name": "BaseBdev1",
00:15:51.694 "uuid": "01e3db14-f034-515e-b34a-e46032f9c651",
00:15:51.694 "is_configured": true,
00:15:51.694 "data_offset": 2048,
00:15:51.694 "data_size": 63488
00:15:51.694 },
00:15:51.694 {
00:15:51.694 "name": "BaseBdev2",
00:15:51.694 "uuid": "fd976418-584d-5d69-9e52-a32040d0f15a",
00:15:51.694 "is_configured": true,
00:15:51.694 "data_offset": 2048,
00:15:51.694 "data_size": 63488
00:15:51.694 },
00:15:51.694 {
00:15:51.694 "name": "BaseBdev3",
00:15:51.694 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28",
00:15:51.694 "is_configured": true,
00:15:51.694 "data_offset": 2048,
00:15:51.694 "data_size": 63488
00:15:51.694 },
00:15:51.694 {
00:15:51.694 "name": "BaseBdev4",
00:15:51.694 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7",
00:15:51.694 "is_configured": true,
00:15:51.694 "data_offset": 2048,
00:15:51.694 "data_size": 63488
00:15:51.694 }
00:15:51.694 ]
00:15:51.694 }'
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.694 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:52.262 [2024-11-05 03:27:05.776749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:52.262 [2024-11-05 03:27:05.884241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:52.262 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.522 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.522 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.522 "name": "raid_bdev1",
00:15:52.522 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282",
00:15:52.522 "strip_size_kb": 0,
00:15:52.522 "state": "online",
00:15:52.522 "raid_level": "raid1",
00:15:52.522 "superblock": true,
00:15:52.522 "num_base_bdevs": 4,
00:15:52.522 "num_base_bdevs_discovered": 3,
00:15:52.522 "num_base_bdevs_operational": 3,
00:15:52.522 "base_bdevs_list": [
00:15:52.522 {
00:15:52.522 "name": null,
00:15:52.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.522 "is_configured": false,
00:15:52.522 "data_offset": 0,
00:15:52.522 "data_size": 63488
00:15:52.522 },
00:15:52.522 {
00:15:52.522 "name": "BaseBdev2",
00:15:52.522 "uuid": "fd976418-584d-5d69-9e52-a32040d0f15a",
00:15:52.522 "is_configured": true,
00:15:52.522 "data_offset": 2048,
00:15:52.522 "data_size": 63488
00:15:52.522 },
00:15:52.522 {
00:15:52.522 "name": "BaseBdev3",
00:15:52.522 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28",
00:15:52.522 "is_configured": true,
00:15:52.522 "data_offset": 2048,
00:15:52.522 "data_size": 63488
00:15:52.522 },
00:15:52.522 {
00:15:52.522 "name": "BaseBdev4",
00:15:52.522 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7",
00:15:52.522 "is_configured": true,
00:15:52.522 "data_offset": 2048,
00:15:52.522 "data_size": 63488
00:15:52.522 }
00:15:52.522 ]
00:15:52.522 }'
00:15:52.522 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.522 03:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:52.522 [2024-11-05 03:27:06.020460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:15:52.522 I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:52.522 Zero copy mechanism will not be used.
00:15:52.522 Running I/O for 60 seconds...
00:15:53.090 03:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:53.090 03:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.090 03:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:53.090 [2024-11-05 03:27:06.436298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:53.090 03:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.090 03:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:53.090 [2024-11-05 03:27:06.492412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:15:53.090 [2024-11-05 03:27:06.494973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:53.090 [2024-11-05 03:27:06.612746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:53.090 [2024-11-05 03:27:06.614582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:53.349 [2024-11-05 03:27:06.867356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:53.349 [2024-11-05 03:27:06.868368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:53.608 169.00 IOPS, 507.00 MiB/s [2024-11-05T03:27:07.247Z] [2024-11-05 03:27:07.236060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:15:53.867 [2024-11-05 03:27:07.356265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:15:53.867 [2024-11-05 03:27:07.356701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.867 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:54.126 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.126 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:54.126 "name": "raid_bdev1",
00:15:54.126 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282",
00:15:54.126 "strip_size_kb": 0,
00:15:54.126 "state": "online",
00:15:54.126 "raid_level": "raid1",
00:15:54.126 "superblock": true,
00:15:54.126 "num_base_bdevs": 4,
00:15:54.126 "num_base_bdevs_discovered": 4,
00:15:54.126 "num_base_bdevs_operational": 4,
00:15:54.126 "process": {
00:15:54.126 "type": "rebuild",
00:15:54.126 "target": "spare",
00:15:54.126 "progress": {
00:15:54.126 "blocks": 12288,
00:15:54.126 "percent": 19
00:15:54.126 }
00:15:54.126 },
00:15:54.126 "base_bdevs_list": [
00:15:54.126 {
00:15:54.126 "name": "spare",
00:15:54.126 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:15:54.126 "is_configured": true, 00:15:54.126 "data_offset": 2048, 00:15:54.126 "data_size": 63488 00:15:54.126 }, 00:15:54.126 { 00:15:54.126 "name": "BaseBdev2", 00:15:54.126 "uuid": "fd976418-584d-5d69-9e52-a32040d0f15a", 00:15:54.126 "is_configured": true, 00:15:54.126 "data_offset": 2048, 00:15:54.126 "data_size": 63488 00:15:54.126 }, 00:15:54.126 { 00:15:54.126 "name": "BaseBdev3", 00:15:54.126 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:15:54.126 "is_configured": true, 00:15:54.126 "data_offset": 2048, 00:15:54.126 "data_size": 63488 00:15:54.126 }, 00:15:54.126 { 00:15:54.126 "name": "BaseBdev4", 00:15:54.126 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:15:54.126 "is_configured": true, 00:15:54.126 "data_offset": 2048, 00:15:54.126 "data_size": 63488 00:15:54.126 } 00:15:54.126 ] 00:15:54.126 }' 00:15:54.126 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.126 [2024-11-05 03:27:07.593306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:54.126 [2024-11-05 03:27:07.594961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:54.126 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.126 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.127 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.127 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:54.127 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.127 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.127 [2024-11-05 03:27:07.661559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.127 [2024-11-05 03:27:07.698484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:54.127 [2024-11-05 03:27:07.718699] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.127 [2024-11-05 03:27:07.722167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.127 [2024-11-05 03:27:07.722215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.127 [2024-11-05 03:27:07.722241] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.386 [2024-11-05 03:27:07.768163] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:54.386 "name": "raid_bdev1",
00:15:54.386 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282",
00:15:54.386 "strip_size_kb": 0,
00:15:54.386 "state": "online",
00:15:54.386 "raid_level": "raid1",
00:15:54.386 "superblock": true,
00:15:54.386 "num_base_bdevs": 4,
00:15:54.386 "num_base_bdevs_discovered": 3,
00:15:54.386 "num_base_bdevs_operational": 3,
00:15:54.386 "base_bdevs_list": [
00:15:54.386 {
00:15:54.386 "name": null,
00:15:54.386 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.386 "is_configured": false,
00:15:54.386 "data_offset": 0,
00:15:54.386 "data_size": 63488
00:15:54.386 },
00:15:54.386 {
00:15:54.386 "name": "BaseBdev2",
00:15:54.386 "uuid": "fd976418-584d-5d69-9e52-a32040d0f15a",
00:15:54.386 "is_configured": true,
00:15:54.386 "data_offset": 2048,
00:15:54.386 "data_size": 63488
00:15:54.386 },
00:15:54.386 {
00:15:54.386 "name": "BaseBdev3",
00:15:54.386 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28",
00:15:54.386 "is_configured": true,
00:15:54.386 "data_offset": 2048,
00:15:54.386 "data_size": 63488
00:15:54.386 },
00:15:54.386 {
00:15:54.386 "name": "BaseBdev4",
00:15:54.386 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7",
00:15:54.386 "is_configured": true,
00:15:54.386 "data_offset": 2048,
00:15:54.386 "data_size": 63488
00:15:54.386 }
00:15:54.386 ]
00:15:54.386 }'
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:54.386 03:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:54.912 133.00 IOPS, 399.00 MiB/s [2024-11-05T03:27:08.551Z] 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:54.912 "name": "raid_bdev1",
00:15:54.912 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282",
00:15:54.912 "strip_size_kb": 0,
00:15:54.912 "state": "online",
00:15:54.912 "raid_level": "raid1",
00:15:54.912 "superblock": true,
00:15:54.912 "num_base_bdevs": 4,
00:15:54.912 "num_base_bdevs_discovered": 3,
00:15:54.912 "num_base_bdevs_operational": 3,
00:15:54.912 "base_bdevs_list": [
00:15:54.912 {
00:15:54.912 "name": null,
00:15:54.912 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.912 "is_configured": false,
00:15:54.912 "data_offset": 0,
00:15:54.912 "data_size": 63488
00:15:54.912 },
00:15:54.912 {
00:15:54.912 "name": "BaseBdev2",
00:15:54.912 "uuid": "fd976418-584d-5d69-9e52-a32040d0f15a",
00:15:54.912 "is_configured": true,
00:15:54.912 "data_offset": 2048,
00:15:54.912 "data_size": 63488
00:15:54.912 },
00:15:54.912 {
00:15:54.912 "name": "BaseBdev3",
00:15:54.912 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28",
00:15:54.912 "is_configured": true,
00:15:54.912 "data_offset": 2048,
00:15:54.912 "data_size": 63488
00:15:54.912 },
00:15:54.912 {
00:15:54.912 "name": "BaseBdev4",
00:15:54.912 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7",
00:15:54.912 "is_configured": true,
00:15:54.912 "data_offset": 2048,
00:15:54.912 "data_size": 63488
00:15:54.912 }
00:15:54.912 ]
00:15:54.912 }'
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.912 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:54.912 [2024-11-05 03:27:08.542678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:55.171 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.171 03:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:55.171 [2024-11-05 03:27:08.641737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:15:55.171 [2024-11-05 03:27:08.644591] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:55.430 [2024-11-05 03:27:08.965898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:55.999 142.67 IOPS, 428.00 MiB/s [2024-11-05T03:27:09.638Z] [2024-11-05 03:27:09.408314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.999 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:56.257 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.257 03:27:09
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.257 "name": "raid_bdev1", 00:15:56.257 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:15:56.257 "strip_size_kb": 0, 00:15:56.257 "state": "online", 00:15:56.257 "raid_level": "raid1", 00:15:56.257 "superblock": true, 00:15:56.258 "num_base_bdevs": 4, 00:15:56.258 "num_base_bdevs_discovered": 4, 00:15:56.258 "num_base_bdevs_operational": 4, 00:15:56.258 "process": { 00:15:56.258 "type": "rebuild", 00:15:56.258 "target": "spare", 00:15:56.258 "progress": { 00:15:56.258 "blocks": 12288, 00:15:56.258 "percent": 19 00:15:56.258 } 00:15:56.258 }, 00:15:56.258 "base_bdevs_list": [ 00:15:56.258 { 00:15:56.258 "name": "spare", 00:15:56.258 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:15:56.258 "is_configured": true, 00:15:56.258 "data_offset": 2048, 00:15:56.258 "data_size": 63488 00:15:56.258 }, 00:15:56.258 { 00:15:56.258 "name": "BaseBdev2", 00:15:56.258 "uuid": "fd976418-584d-5d69-9e52-a32040d0f15a", 00:15:56.258 "is_configured": true, 00:15:56.258 "data_offset": 2048, 00:15:56.258 "data_size": 63488 00:15:56.258 }, 00:15:56.258 { 00:15:56.258 "name": "BaseBdev3", 00:15:56.258 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:15:56.258 "is_configured": true, 00:15:56.258 "data_offset": 2048, 00:15:56.258 "data_size": 63488 00:15:56.258 }, 00:15:56.258 { 00:15:56.258 "name": "BaseBdev4", 00:15:56.258 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:15:56.258 "is_configured": true, 00:15:56.258 "data_offset": 2048, 00:15:56.258 "data_size": 63488 00:15:56.258 } 00:15:56.258 ] 00:15:56.258 }' 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.258 03:27:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:56.258 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.258 03:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 [2024-11-05 03:27:09.811484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.516 125.25 IOPS, 375.75 MiB/s [2024-11-05T03:27:10.155Z] [2024-11-05 03:27:10.032739] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:56.516 [2024-11-05 03:27:10.032813] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.516 03:27:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.516 "name": "raid_bdev1", 00:15:56.516 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:15:56.516 "strip_size_kb": 0, 00:15:56.516 "state": "online", 00:15:56.516 "raid_level": "raid1", 00:15:56.516 "superblock": true, 00:15:56.516 "num_base_bdevs": 4, 00:15:56.516 "num_base_bdevs_discovered": 3, 00:15:56.516 "num_base_bdevs_operational": 3, 00:15:56.516 "process": { 00:15:56.516 "type": "rebuild", 00:15:56.516 "target": "spare", 00:15:56.516 "progress": { 00:15:56.516 "blocks": 18432, 00:15:56.516 "percent": 29 00:15:56.516 } 00:15:56.516 }, 00:15:56.516 "base_bdevs_list": [ 00:15:56.516 { 00:15:56.516 "name": "spare", 00:15:56.516 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:15:56.516 "is_configured": true, 00:15:56.516 "data_offset": 2048, 00:15:56.516 "data_size": 63488 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "name": null, 00:15:56.516 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:56.516 "is_configured": false, 00:15:56.516 "data_offset": 0, 00:15:56.516 "data_size": 63488 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "name": "BaseBdev3", 00:15:56.516 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:15:56.516 "is_configured": true, 00:15:56.516 "data_offset": 2048, 00:15:56.516 "data_size": 63488 00:15:56.516 }, 00:15:56.516 { 00:15:56.516 "name": "BaseBdev4", 00:15:56.516 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:15:56.516 "is_configured": true, 00:15:56.516 "data_offset": 2048, 00:15:56.516 "data_size": 63488 00:15:56.516 } 00:15:56.516 ] 00:15:56.516 }' 00:15:56.516 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.775 [2024-11-05 03:27:10.172844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.775 [2024-11-05 03:27:10.174209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=532 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.775 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.776 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.776 "name": "raid_bdev1", 00:15:56.776 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:15:56.776 "strip_size_kb": 0, 00:15:56.776 "state": "online", 00:15:56.776 "raid_level": "raid1", 00:15:56.776 "superblock": true, 00:15:56.776 "num_base_bdevs": 4, 00:15:56.776 "num_base_bdevs_discovered": 3, 00:15:56.776 "num_base_bdevs_operational": 3, 00:15:56.776 "process": { 00:15:56.776 "type": "rebuild", 00:15:56.776 "target": "spare", 00:15:56.776 "progress": { 00:15:56.776 "blocks": 20480, 00:15:56.776 "percent": 32 00:15:56.776 } 00:15:56.776 }, 00:15:56.776 "base_bdevs_list": [ 00:15:56.776 { 00:15:56.776 "name": "spare", 00:15:56.776 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:15:56.776 "is_configured": true, 00:15:56.776 "data_offset": 2048, 00:15:56.776 "data_size": 63488 00:15:56.776 }, 00:15:56.776 { 00:15:56.776 "name": null, 00:15:56.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.776 "is_configured": false, 00:15:56.776 "data_offset": 0, 00:15:56.776 "data_size": 63488 00:15:56.776 }, 00:15:56.776 { 00:15:56.776 "name": "BaseBdev3", 00:15:56.776 
"uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:15:56.776 "is_configured": true, 00:15:56.776 "data_offset": 2048, 00:15:56.776 "data_size": 63488 00:15:56.776 }, 00:15:56.776 { 00:15:56.776 "name": "BaseBdev4", 00:15:56.776 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:15:56.776 "is_configured": true, 00:15:56.776 "data_offset": 2048, 00:15:56.776 "data_size": 63488 00:15:56.776 } 00:15:56.776 ] 00:15:56.776 }' 00:15:56.776 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.776 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.776 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.776 [2024-11-05 03:27:10.392608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:56.776 [2024-11-05 03:27:10.393010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:56.776 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.776 03:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.344 [2024-11-05 03:27:10.864525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:57.862 109.80 IOPS, 329.40 MiB/s [2024-11-05T03:27:11.501Z] [2024-11-05 03:27:11.242452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.862 "name": "raid_bdev1", 00:15:57.862 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:15:57.862 "strip_size_kb": 0, 00:15:57.862 "state": "online", 00:15:57.862 "raid_level": "raid1", 00:15:57.862 "superblock": true, 00:15:57.862 "num_base_bdevs": 4, 00:15:57.862 "num_base_bdevs_discovered": 3, 00:15:57.862 "num_base_bdevs_operational": 3, 00:15:57.862 "process": { 00:15:57.862 "type": "rebuild", 00:15:57.862 "target": "spare", 00:15:57.862 "progress": { 00:15:57.862 "blocks": 34816, 00:15:57.862 "percent": 54 00:15:57.862 } 00:15:57.862 }, 00:15:57.862 "base_bdevs_list": [ 00:15:57.862 { 00:15:57.862 "name": "spare", 00:15:57.862 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:15:57.862 "is_configured": true, 00:15:57.862 "data_offset": 2048, 00:15:57.862 "data_size": 63488 00:15:57.862 }, 00:15:57.862 { 00:15:57.862 "name": null, 00:15:57.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.862 
"is_configured": false, 00:15:57.862 "data_offset": 0, 00:15:57.862 "data_size": 63488 00:15:57.862 }, 00:15:57.862 { 00:15:57.862 "name": "BaseBdev3", 00:15:57.862 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:15:57.862 "is_configured": true, 00:15:57.862 "data_offset": 2048, 00:15:57.862 "data_size": 63488 00:15:57.862 }, 00:15:57.862 { 00:15:57.862 "name": "BaseBdev4", 00:15:57.862 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:15:57.862 "is_configured": true, 00:15:57.862 "data_offset": 2048, 00:15:57.862 "data_size": 63488 00:15:57.862 } 00:15:57.862 ] 00:15:57.862 }' 00:15:57.862 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.121 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.121 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.121 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.122 03:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.122 [2024-11-05 03:27:11.574924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:58.122 [2024-11-05 03:27:11.684641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:58.688 99.50 IOPS, 298.50 MiB/s [2024-11-05T03:27:12.327Z] [2024-11-05 03:27:12.126894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:58.688 [2024-11-05 03:27:12.127719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:58.947 [2024-11-05 03:27:12.464261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 
49152 offset_end: 55296 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.947 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.207 "name": "raid_bdev1", 00:15:59.207 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:15:59.207 "strip_size_kb": 0, 00:15:59.207 "state": "online", 00:15:59.207 "raid_level": "raid1", 00:15:59.207 "superblock": true, 00:15:59.207 "num_base_bdevs": 4, 00:15:59.207 "num_base_bdevs_discovered": 3, 00:15:59.207 "num_base_bdevs_operational": 3, 00:15:59.207 "process": { 00:15:59.207 "type": "rebuild", 00:15:59.207 "target": "spare", 00:15:59.207 "progress": { 00:15:59.207 "blocks": 51200, 00:15:59.207 "percent": 80 00:15:59.207 } 00:15:59.207 }, 00:15:59.207 "base_bdevs_list": [ 00:15:59.207 { 
00:15:59.207 "name": "spare", 00:15:59.207 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:15:59.207 "is_configured": true, 00:15:59.207 "data_offset": 2048, 00:15:59.207 "data_size": 63488 00:15:59.207 }, 00:15:59.207 { 00:15:59.207 "name": null, 00:15:59.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.207 "is_configured": false, 00:15:59.207 "data_offset": 0, 00:15:59.207 "data_size": 63488 00:15:59.207 }, 00:15:59.207 { 00:15:59.207 "name": "BaseBdev3", 00:15:59.207 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:15:59.207 "is_configured": true, 00:15:59.207 "data_offset": 2048, 00:15:59.207 "data_size": 63488 00:15:59.207 }, 00:15:59.207 { 00:15:59.207 "name": "BaseBdev4", 00:15:59.207 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:15:59.207 "is_configured": true, 00:15:59.207 "data_offset": 2048, 00:15:59.207 "data_size": 63488 00:15:59.207 } 00:15:59.207 ] 00:15:59.207 }' 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.207 03:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.207 [2024-11-05 03:27:12.816534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:59.470 89.00 IOPS, 267.00 MiB/s [2024-11-05T03:27:13.109Z] [2024-11-05 03:27:13.039161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:59.738 [2024-11-05 03:27:13.373513] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:59.996 
[2024-11-05 03:27:13.479962] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:59.996 [2024-11-05 03:27:13.483806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.255 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.256 "name": "raid_bdev1", 00:16:00.256 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:00.256 "strip_size_kb": 0, 00:16:00.256 "state": "online", 00:16:00.256 "raid_level": "raid1", 00:16:00.256 "superblock": true, 00:16:00.256 "num_base_bdevs": 4, 00:16:00.256 "num_base_bdevs_discovered": 3, 00:16:00.256 "num_base_bdevs_operational": 3, 00:16:00.256 "base_bdevs_list": [ 00:16:00.256 { 
00:16:00.256 "name": "spare", 00:16:00.256 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:00.256 "is_configured": true, 00:16:00.256 "data_offset": 2048, 00:16:00.256 "data_size": 63488 00:16:00.256 }, 00:16:00.256 { 00:16:00.256 "name": null, 00:16:00.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.256 "is_configured": false, 00:16:00.256 "data_offset": 0, 00:16:00.256 "data_size": 63488 00:16:00.256 }, 00:16:00.256 { 00:16:00.256 "name": "BaseBdev3", 00:16:00.256 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:00.256 "is_configured": true, 00:16:00.256 "data_offset": 2048, 00:16:00.256 "data_size": 63488 00:16:00.256 }, 00:16:00.256 { 00:16:00.256 "name": "BaseBdev4", 00:16:00.256 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:00.256 "is_configured": true, 00:16:00.256 "data_offset": 2048, 00:16:00.256 "data_size": 63488 00:16:00.256 } 00:16:00.256 ] 00:16:00.256 }' 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:00.256 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.515 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.516 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.516 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.516 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.516 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.516 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.516 "name": "raid_bdev1", 00:16:00.516 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:00.516 "strip_size_kb": 0, 00:16:00.516 "state": "online", 00:16:00.516 "raid_level": "raid1", 00:16:00.516 "superblock": true, 00:16:00.516 "num_base_bdevs": 4, 00:16:00.516 "num_base_bdevs_discovered": 3, 00:16:00.516 "num_base_bdevs_operational": 3, 00:16:00.516 "base_bdevs_list": [ 00:16:00.516 { 00:16:00.516 "name": "spare", 00:16:00.516 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:00.516 "is_configured": true, 00:16:00.516 "data_offset": 2048, 00:16:00.516 "data_size": 63488 00:16:00.516 }, 00:16:00.516 { 00:16:00.516 "name": null, 00:16:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.516 "is_configured": false, 00:16:00.516 "data_offset": 0, 00:16:00.516 "data_size": 63488 00:16:00.516 }, 00:16:00.516 { 00:16:00.516 "name": "BaseBdev3", 00:16:00.516 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:00.516 "is_configured": true, 00:16:00.516 "data_offset": 2048, 00:16:00.516 "data_size": 63488 00:16:00.516 }, 00:16:00.516 { 00:16:00.516 "name": "BaseBdev4", 00:16:00.516 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:00.516 "is_configured": true, 00:16:00.516 "data_offset": 2048, 00:16:00.516 "data_size": 63488 
00:16:00.516 } 00:16:00.516 ] 00:16:00.516 }' 00:16:00.516 03:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.516 81.50 IOPS, 244.50 MiB/s [2024-11-05T03:27:14.155Z] 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.516 "name": "raid_bdev1", 00:16:00.516 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:00.516 "strip_size_kb": 0, 00:16:00.516 "state": "online", 00:16:00.516 "raid_level": "raid1", 00:16:00.516 "superblock": true, 00:16:00.516 "num_base_bdevs": 4, 00:16:00.516 "num_base_bdevs_discovered": 3, 00:16:00.516 "num_base_bdevs_operational": 3, 00:16:00.516 "base_bdevs_list": [ 00:16:00.516 { 00:16:00.516 "name": "spare", 00:16:00.516 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:00.516 "is_configured": true, 00:16:00.516 "data_offset": 2048, 00:16:00.516 "data_size": 63488 00:16:00.516 }, 00:16:00.516 { 00:16:00.516 "name": null, 00:16:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.516 "is_configured": false, 00:16:00.516 "data_offset": 0, 00:16:00.516 "data_size": 63488 00:16:00.516 }, 00:16:00.516 { 00:16:00.516 "name": "BaseBdev3", 00:16:00.516 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:00.516 "is_configured": true, 00:16:00.516 "data_offset": 2048, 00:16:00.516 "data_size": 63488 00:16:00.516 }, 00:16:00.516 { 00:16:00.516 "name": "BaseBdev4", 00:16:00.516 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:00.516 "is_configured": true, 00:16:00.516 "data_offset": 2048, 00:16:00.516 "data_size": 63488 00:16:00.516 } 00:16:00.516 ] 00:16:00.516 }' 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.516 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.085 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:16:01.085 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.085 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.085 [2024-11-05 03:27:14.589043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.085 [2024-11-05 03:27:14.589079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.085 00:16:01.085 Latency(us) 00:16:01.085 [2024-11-05T03:27:14.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.085 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:01.085 raid_bdev1 : 8.68 78.14 234.41 0.00 0.00 17119.55 269.96 123922.62 00:16:01.085 [2024-11-05T03:27:14.724Z] =================================================================================================================== 00:16:01.085 [2024-11-05T03:27:14.724Z] Total : 78.14 234.41 0.00 0.00 17119.55 269.96 123922.62 00:16:01.085 [2024-11-05 03:27:14.719324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.085 [2024-11-05 03:27:14.719401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.085 { 00:16:01.085 "results": [ 00:16:01.085 { 00:16:01.085 "job": "raid_bdev1", 00:16:01.085 "core_mask": "0x1", 00:16:01.085 "workload": "randrw", 00:16:01.085 "percentage": 50, 00:16:01.085 "status": "finished", 00:16:01.085 "queue_depth": 2, 00:16:01.085 "io_size": 3145728, 00:16:01.085 "runtime": 8.676939, 00:16:01.085 "iops": 78.13815448051439, 00:16:01.085 "mibps": 234.41446344154315, 00:16:01.085 "io_failed": 0, 00:16:01.085 "io_timeout": 0, 00:16:01.085 "avg_latency_us": 17119.552737999464, 00:16:01.085 "min_latency_us": 269.96363636363634, 00:16:01.085 "max_latency_us": 123922.61818181818 00:16:01.085 } 00:16:01.085 ], 00:16:01.085 "core_count": 1 00:16:01.085 } 00:16:01.085 [2024-11-05 
03:27:14.719555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.085 [2024-11-05 03:27:14.719575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # 
local i 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.344 03:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:01.603 /dev/nbd0 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:01.603 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.603 1+0 records in 00:16:01.604 1+0 records out 00:16:01.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422214 s, 9.7 MB/s 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.604 03:27:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:01.604 03:27:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.604 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:01.862 /dev/nbd1 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.863 1+0 records in 00:16:01.863 1+0 records out 00:16:01.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324847 s, 12.6 MB/s 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # size=4096 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:01.863 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.122 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.381 
03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.381 03:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:02.640 /dev/nbd1 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.640 1+0 records in 00:16:02.640 1+0 records out 00:16:02.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003249 s, 12.6 MB/s 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.640 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:02.899 03:27:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.899 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.467 03:27:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.467 [2024-11-05 03:27:16.824056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.467 [2024-11-05 03:27:16.824113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.467 [2024-11-05 03:27:16.824143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:03.467 [2024-11-05 03:27:16.824166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.467 [2024-11-05 03:27:16.827217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.467 [2024-11-05 03:27:16.827257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.467 [2024-11-05 03:27:16.827445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:03.467 [2024-11-05 03:27:16.827536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.467 [2024-11-05 03:27:16.827733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.467 [2024-11-05 03:27:16.827883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:03.467 spare 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.467 [2024-11-05 03:27:16.928013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:03.467 [2024-11-05 03:27:16.928055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:03.467 [2024-11-05 03:27:16.928521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:03.467 [2024-11-05 03:27:16.928778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:03.467 [2024-11-05 03:27:16.928809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:03.467 [2024-11-05 03:27:16.929081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.467 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.467 "name": "raid_bdev1", 00:16:03.467 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:03.467 "strip_size_kb": 0, 00:16:03.468 "state": "online", 00:16:03.468 "raid_level": "raid1", 00:16:03.468 "superblock": true, 00:16:03.468 "num_base_bdevs": 4, 00:16:03.468 "num_base_bdevs_discovered": 3, 00:16:03.468 "num_base_bdevs_operational": 3, 00:16:03.468 "base_bdevs_list": [ 00:16:03.468 { 00:16:03.468 "name": "spare", 00:16:03.468 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:03.468 "is_configured": true, 00:16:03.468 "data_offset": 2048, 00:16:03.468 "data_size": 63488 00:16:03.468 }, 00:16:03.468 { 00:16:03.468 "name": null, 00:16:03.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.468 "is_configured": false, 00:16:03.468 "data_offset": 2048, 00:16:03.468 "data_size": 63488 00:16:03.468 }, 00:16:03.468 { 00:16:03.468 "name": "BaseBdev3", 00:16:03.468 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:03.468 "is_configured": true, 00:16:03.468 "data_offset": 2048, 00:16:03.468 "data_size": 63488 00:16:03.468 }, 00:16:03.468 { 00:16:03.468 "name": "BaseBdev4", 00:16:03.468 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:03.468 
"is_configured": true, 00:16:03.468 "data_offset": 2048, 00:16:03.468 "data_size": 63488 00:16:03.468 } 00:16:03.468 ] 00:16:03.468 }' 00:16:03.468 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.468 03:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.036 "name": "raid_bdev1", 00:16:04.036 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:04.036 "strip_size_kb": 0, 00:16:04.036 "state": "online", 00:16:04.036 "raid_level": "raid1", 00:16:04.036 "superblock": true, 00:16:04.036 "num_base_bdevs": 4, 00:16:04.036 "num_base_bdevs_discovered": 3, 00:16:04.036 "num_base_bdevs_operational": 3, 00:16:04.036 "base_bdevs_list": [ 00:16:04.036 { 00:16:04.036 "name": 
"spare", 00:16:04.036 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:04.036 "is_configured": true, 00:16:04.036 "data_offset": 2048, 00:16:04.036 "data_size": 63488 00:16:04.036 }, 00:16:04.036 { 00:16:04.036 "name": null, 00:16:04.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.036 "is_configured": false, 00:16:04.036 "data_offset": 2048, 00:16:04.036 "data_size": 63488 00:16:04.036 }, 00:16:04.036 { 00:16:04.036 "name": "BaseBdev3", 00:16:04.036 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:04.036 "is_configured": true, 00:16:04.036 "data_offset": 2048, 00:16:04.036 "data_size": 63488 00:16:04.036 }, 00:16:04.036 { 00:16:04.036 "name": "BaseBdev4", 00:16:04.036 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:04.036 "is_configured": true, 00:16:04.036 "data_offset": 2048, 00:16:04.036 "data_size": 63488 00:16:04.036 } 00:16:04.036 ] 00:16:04.036 }' 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.036 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.320 [2024-11-05 03:27:17.689398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.320 "name": "raid_bdev1", 00:16:04.320 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:04.320 "strip_size_kb": 0, 00:16:04.320 "state": "online", 00:16:04.320 "raid_level": "raid1", 00:16:04.320 "superblock": true, 00:16:04.320 "num_base_bdevs": 4, 00:16:04.320 "num_base_bdevs_discovered": 2, 00:16:04.320 "num_base_bdevs_operational": 2, 00:16:04.320 "base_bdevs_list": [ 00:16:04.320 { 00:16:04.320 "name": null, 00:16:04.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.320 "is_configured": false, 00:16:04.320 "data_offset": 0, 00:16:04.320 "data_size": 63488 00:16:04.320 }, 00:16:04.320 { 00:16:04.320 "name": null, 00:16:04.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.320 "is_configured": false, 00:16:04.320 "data_offset": 2048, 00:16:04.320 "data_size": 63488 00:16:04.320 }, 00:16:04.320 { 00:16:04.320 "name": "BaseBdev3", 00:16:04.320 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:04.320 "is_configured": true, 00:16:04.320 "data_offset": 2048, 00:16:04.320 "data_size": 63488 00:16:04.320 }, 00:16:04.320 { 00:16:04.320 "name": "BaseBdev4", 00:16:04.320 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:04.320 "is_configured": true, 00:16:04.320 "data_offset": 2048, 00:16:04.320 "data_size": 63488 00:16:04.320 } 00:16:04.320 ] 00:16:04.320 }' 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.320 03:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.886 03:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:16:04.886 03:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.886 03:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.886 [2024-11-05 03:27:18.253781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.886 [2024-11-05 03:27:18.254056] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:04.886 [2024-11-05 03:27:18.254088] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:04.886 [2024-11-05 03:27:18.254136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.887 [2024-11-05 03:27:18.269175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:04.887 03:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.887 03:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:04.887 [2024-11-05 03:27:18.271941] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.822 03:27:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.822 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.822 "name": "raid_bdev1", 00:16:05.822 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:05.822 "strip_size_kb": 0, 00:16:05.822 "state": "online", 00:16:05.822 "raid_level": "raid1", 00:16:05.822 "superblock": true, 00:16:05.822 "num_base_bdevs": 4, 00:16:05.822 "num_base_bdevs_discovered": 3, 00:16:05.822 "num_base_bdevs_operational": 3, 00:16:05.822 "process": { 00:16:05.822 "type": "rebuild", 00:16:05.822 "target": "spare", 00:16:05.822 "progress": { 00:16:05.822 "blocks": 20480, 00:16:05.822 "percent": 32 00:16:05.822 } 00:16:05.822 }, 00:16:05.822 "base_bdevs_list": [ 00:16:05.822 { 00:16:05.822 "name": "spare", 00:16:05.822 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:05.822 "is_configured": true, 00:16:05.822 "data_offset": 2048, 00:16:05.822 "data_size": 63488 00:16:05.822 }, 00:16:05.822 { 00:16:05.822 "name": null, 00:16:05.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.822 "is_configured": false, 00:16:05.822 "data_offset": 2048, 00:16:05.822 "data_size": 63488 00:16:05.822 }, 00:16:05.822 { 00:16:05.822 "name": "BaseBdev3", 00:16:05.822 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:05.822 "is_configured": true, 00:16:05.822 "data_offset": 2048, 00:16:05.823 "data_size": 63488 00:16:05.823 }, 00:16:05.823 { 00:16:05.823 "name": "BaseBdev4", 00:16:05.823 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:05.823 "is_configured": true, 00:16:05.823 "data_offset": 2048, 00:16:05.823 
"data_size": 63488 00:16:05.823 } 00:16:05.823 ] 00:16:05.823 }' 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.823 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.823 [2024-11-05 03:27:19.454215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.081 [2024-11-05 03:27:19.481092] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.081 [2024-11-05 03:27:19.481158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.081 [2024-11-05 03:27:19.481187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.081 [2024-11-05 03:27:19.481197] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.081 03:27:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.081 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.081 "name": "raid_bdev1", 00:16:06.081 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:06.081 "strip_size_kb": 0, 00:16:06.081 "state": "online", 00:16:06.081 "raid_level": "raid1", 00:16:06.081 "superblock": true, 00:16:06.081 "num_base_bdevs": 4, 00:16:06.081 "num_base_bdevs_discovered": 2, 00:16:06.081 "num_base_bdevs_operational": 2, 00:16:06.081 "base_bdevs_list": [ 00:16:06.081 { 00:16:06.081 "name": null, 00:16:06.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.082 "is_configured": false, 00:16:06.082 "data_offset": 0, 00:16:06.082 "data_size": 
63488 00:16:06.082 }, 00:16:06.082 { 00:16:06.082 "name": null, 00:16:06.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.082 "is_configured": false, 00:16:06.082 "data_offset": 2048, 00:16:06.082 "data_size": 63488 00:16:06.082 }, 00:16:06.082 { 00:16:06.082 "name": "BaseBdev3", 00:16:06.082 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:06.082 "is_configured": true, 00:16:06.082 "data_offset": 2048, 00:16:06.082 "data_size": 63488 00:16:06.082 }, 00:16:06.082 { 00:16:06.082 "name": "BaseBdev4", 00:16:06.082 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:06.082 "is_configured": true, 00:16:06.082 "data_offset": 2048, 00:16:06.082 "data_size": 63488 00:16:06.082 } 00:16:06.082 ] 00:16:06.082 }' 00:16:06.082 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.082 03:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.647 03:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.647 03:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.647 03:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.647 [2024-11-05 03:27:20.054188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.647 [2024-11-05 03:27:20.054267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.647 [2024-11-05 03:27:20.054304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:06.647 [2024-11-05 03:27:20.054352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.647 [2024-11-05 03:27:20.054969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.647 [2024-11-05 03:27:20.055007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:16:06.647 [2024-11-05 03:27:20.055149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.647 [2024-11-05 03:27:20.055167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:06.647 [2024-11-05 03:27:20.055184] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:06.647 [2024-11-05 03:27:20.055215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.647 [2024-11-05 03:27:20.069114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:06.647 spare 00:16:06.647 03:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.647 03:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:06.647 [2024-11-05 03:27:20.071740] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.581 03:27:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.581 "name": "raid_bdev1", 00:16:07.581 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:07.581 "strip_size_kb": 0, 00:16:07.581 "state": "online", 00:16:07.581 "raid_level": "raid1", 00:16:07.581 "superblock": true, 00:16:07.581 "num_base_bdevs": 4, 00:16:07.581 "num_base_bdevs_discovered": 3, 00:16:07.581 "num_base_bdevs_operational": 3, 00:16:07.581 "process": { 00:16:07.581 "type": "rebuild", 00:16:07.581 "target": "spare", 00:16:07.581 "progress": { 00:16:07.581 "blocks": 20480, 00:16:07.581 "percent": 32 00:16:07.581 } 00:16:07.581 }, 00:16:07.581 "base_bdevs_list": [ 00:16:07.581 { 00:16:07.581 "name": "spare", 00:16:07.581 "uuid": "166ef83a-acd1-5c8f-9df1-2c6ef33f5be2", 00:16:07.581 "is_configured": true, 00:16:07.581 "data_offset": 2048, 00:16:07.581 "data_size": 63488 00:16:07.581 }, 00:16:07.581 { 00:16:07.581 "name": null, 00:16:07.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.581 "is_configured": false, 00:16:07.581 "data_offset": 2048, 00:16:07.581 "data_size": 63488 00:16:07.581 }, 00:16:07.581 { 00:16:07.581 "name": "BaseBdev3", 00:16:07.581 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:07.581 "is_configured": true, 00:16:07.581 "data_offset": 2048, 00:16:07.581 "data_size": 63488 00:16:07.581 }, 00:16:07.581 { 00:16:07.581 "name": "BaseBdev4", 00:16:07.581 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:07.581 "is_configured": true, 00:16:07.581 "data_offset": 2048, 00:16:07.581 "data_size": 63488 00:16:07.581 } 00:16:07.581 ] 00:16:07.581 }' 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.581 03:27:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.581 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.840 [2024-11-05 03:27:21.245059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.840 [2024-11-05 03:27:21.280983] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.840 [2024-11-05 03:27:21.281062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.840 [2024-11-05 03:27:21.281085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.840 [2024-11-05 03:27:21.281099] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.840 03:27:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.840 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.841 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.841 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.841 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.841 "name": "raid_bdev1", 00:16:07.841 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:07.841 "strip_size_kb": 0, 00:16:07.841 "state": "online", 00:16:07.841 "raid_level": "raid1", 00:16:07.841 "superblock": true, 00:16:07.841 "num_base_bdevs": 4, 00:16:07.841 "num_base_bdevs_discovered": 2, 00:16:07.841 "num_base_bdevs_operational": 2, 00:16:07.841 "base_bdevs_list": [ 00:16:07.841 { 00:16:07.841 "name": null, 00:16:07.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.841 "is_configured": false, 00:16:07.841 "data_offset": 0, 00:16:07.841 "data_size": 63488 00:16:07.841 }, 00:16:07.841 { 00:16:07.841 "name": null, 00:16:07.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.841 "is_configured": false, 00:16:07.841 "data_offset": 2048, 00:16:07.841 
"data_size": 63488 00:16:07.841 }, 00:16:07.841 { 00:16:07.841 "name": "BaseBdev3", 00:16:07.841 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:07.841 "is_configured": true, 00:16:07.841 "data_offset": 2048, 00:16:07.841 "data_size": 63488 00:16:07.841 }, 00:16:07.841 { 00:16:07.841 "name": "BaseBdev4", 00:16:07.841 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:07.841 "is_configured": true, 00:16:07.841 "data_offset": 2048, 00:16:07.841 "data_size": 63488 00:16:07.841 } 00:16:07.841 ] 00:16:07.841 }' 00:16:07.841 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.841 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.407 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.407 "name": "raid_bdev1", 
00:16:08.407 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:08.407 "strip_size_kb": 0, 00:16:08.407 "state": "online", 00:16:08.407 "raid_level": "raid1", 00:16:08.407 "superblock": true, 00:16:08.407 "num_base_bdevs": 4, 00:16:08.408 "num_base_bdevs_discovered": 2, 00:16:08.408 "num_base_bdevs_operational": 2, 00:16:08.408 "base_bdevs_list": [ 00:16:08.408 { 00:16:08.408 "name": null, 00:16:08.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.408 "is_configured": false, 00:16:08.408 "data_offset": 0, 00:16:08.408 "data_size": 63488 00:16:08.408 }, 00:16:08.408 { 00:16:08.408 "name": null, 00:16:08.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.408 "is_configured": false, 00:16:08.408 "data_offset": 2048, 00:16:08.408 "data_size": 63488 00:16:08.408 }, 00:16:08.408 { 00:16:08.408 "name": "BaseBdev3", 00:16:08.408 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:08.408 "is_configured": true, 00:16:08.408 "data_offset": 2048, 00:16:08.408 "data_size": 63488 00:16:08.408 }, 00:16:08.408 { 00:16:08.408 "name": "BaseBdev4", 00:16:08.408 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:08.408 "is_configured": true, 00:16:08.408 "data_offset": 2048, 00:16:08.408 "data_size": 63488 00:16:08.408 } 00:16:08.408 ] 00:16:08.408 }' 00:16:08.408 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.408 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.408 03:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.408 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.408 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:08.408 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.408 03:27:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.666 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.666 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:08.666 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.666 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.666 [2024-11-05 03:27:22.054778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:08.666 [2024-11-05 03:27:22.054843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.666 [2024-11-05 03:27:22.054880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:08.666 [2024-11-05 03:27:22.054897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.666 [2024-11-05 03:27:22.055480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.666 [2024-11-05 03:27:22.055520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.666 [2024-11-05 03:27:22.055630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:08.666 [2024-11-05 03:27:22.055677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:08.666 [2024-11-05 03:27:22.055689] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:08.666 [2024-11-05 03:27:22.055721] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:08.666 BaseBdev1 00:16:08.666 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:08.666 03:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.600 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.600 "name": "raid_bdev1", 00:16:09.600 "uuid": 
"fd45445c-b0f8-4802-a8b4-289107969282", 00:16:09.600 "strip_size_kb": 0, 00:16:09.600 "state": "online", 00:16:09.600 "raid_level": "raid1", 00:16:09.600 "superblock": true, 00:16:09.600 "num_base_bdevs": 4, 00:16:09.600 "num_base_bdevs_discovered": 2, 00:16:09.600 "num_base_bdevs_operational": 2, 00:16:09.600 "base_bdevs_list": [ 00:16:09.600 { 00:16:09.600 "name": null, 00:16:09.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.600 "is_configured": false, 00:16:09.600 "data_offset": 0, 00:16:09.600 "data_size": 63488 00:16:09.600 }, 00:16:09.600 { 00:16:09.600 "name": null, 00:16:09.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.600 "is_configured": false, 00:16:09.600 "data_offset": 2048, 00:16:09.600 "data_size": 63488 00:16:09.600 }, 00:16:09.600 { 00:16:09.600 "name": "BaseBdev3", 00:16:09.600 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:09.600 "is_configured": true, 00:16:09.600 "data_offset": 2048, 00:16:09.600 "data_size": 63488 00:16:09.600 }, 00:16:09.600 { 00:16:09.600 "name": "BaseBdev4", 00:16:09.600 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:09.600 "is_configured": true, 00:16:09.601 "data_offset": 2048, 00:16:09.601 "data_size": 63488 00:16:09.601 } 00:16:09.601 ] 00:16:09.601 }' 00:16:09.601 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.601 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.167 "name": "raid_bdev1", 00:16:10.167 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:10.167 "strip_size_kb": 0, 00:16:10.167 "state": "online", 00:16:10.167 "raid_level": "raid1", 00:16:10.167 "superblock": true, 00:16:10.167 "num_base_bdevs": 4, 00:16:10.167 "num_base_bdevs_discovered": 2, 00:16:10.167 "num_base_bdevs_operational": 2, 00:16:10.167 "base_bdevs_list": [ 00:16:10.167 { 00:16:10.167 "name": null, 00:16:10.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.167 "is_configured": false, 00:16:10.167 "data_offset": 0, 00:16:10.167 "data_size": 63488 00:16:10.167 }, 00:16:10.167 { 00:16:10.167 "name": null, 00:16:10.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.167 "is_configured": false, 00:16:10.167 "data_offset": 2048, 00:16:10.167 "data_size": 63488 00:16:10.167 }, 00:16:10.167 { 00:16:10.167 "name": "BaseBdev3", 00:16:10.167 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:10.167 "is_configured": true, 00:16:10.167 "data_offset": 2048, 00:16:10.167 "data_size": 63488 00:16:10.167 }, 00:16:10.167 { 00:16:10.167 "name": "BaseBdev4", 00:16:10.167 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:10.167 "is_configured": true, 00:16:10.167 "data_offset": 2048, 00:16:10.167 "data_size": 63488 00:16:10.167 
} 00:16:10.167 ] 00:16:10.167 }' 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.167 [2024-11-05 03:27:23.783603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.167 [2024-11-05 03:27:23.783917] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:16:10.167 [2024-11-05 03:27:23.783946] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:10.167 request: 00:16:10.167 { 00:16:10.167 "base_bdev": "BaseBdev1", 00:16:10.167 "raid_bdev": "raid_bdev1", 00:16:10.167 "method": "bdev_raid_add_base_bdev", 00:16:10.167 "req_id": 1 00:16:10.167 } 00:16:10.167 Got JSON-RPC error response 00:16:10.167 response: 00:16:10.167 { 00:16:10.167 "code": -22, 00:16:10.167 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:10.167 } 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.167 03:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:11.542 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.542 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.542 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.542 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.542 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.543 "name": "raid_bdev1", 00:16:11.543 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:11.543 "strip_size_kb": 0, 00:16:11.543 "state": "online", 00:16:11.543 "raid_level": "raid1", 00:16:11.543 "superblock": true, 00:16:11.543 "num_base_bdevs": 4, 00:16:11.543 "num_base_bdevs_discovered": 2, 00:16:11.543 "num_base_bdevs_operational": 2, 00:16:11.543 "base_bdevs_list": [ 00:16:11.543 { 00:16:11.543 "name": null, 00:16:11.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.543 "is_configured": false, 00:16:11.543 "data_offset": 0, 00:16:11.543 "data_size": 63488 00:16:11.543 }, 00:16:11.543 { 00:16:11.543 "name": null, 00:16:11.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.543 "is_configured": false, 00:16:11.543 "data_offset": 2048, 00:16:11.543 "data_size": 63488 00:16:11.543 }, 00:16:11.543 { 00:16:11.543 "name": "BaseBdev3", 00:16:11.543 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:11.543 "is_configured": true, 00:16:11.543 
"data_offset": 2048, 00:16:11.543 "data_size": 63488 00:16:11.543 }, 00:16:11.543 { 00:16:11.543 "name": "BaseBdev4", 00:16:11.543 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:11.543 "is_configured": true, 00:16:11.543 "data_offset": 2048, 00:16:11.543 "data_size": 63488 00:16:11.543 } 00:16:11.543 ] 00:16:11.543 }' 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.543 03:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.801 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.801 "name": "raid_bdev1", 00:16:11.801 "uuid": "fd45445c-b0f8-4802-a8b4-289107969282", 00:16:11.801 "strip_size_kb": 0, 00:16:11.801 "state": "online", 00:16:11.801 "raid_level": "raid1", 00:16:11.801 "superblock": true, 
00:16:11.801 "num_base_bdevs": 4, 00:16:11.801 "num_base_bdevs_discovered": 2, 00:16:11.801 "num_base_bdevs_operational": 2, 00:16:11.801 "base_bdevs_list": [ 00:16:11.802 { 00:16:11.802 "name": null, 00:16:11.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.802 "is_configured": false, 00:16:11.802 "data_offset": 0, 00:16:11.802 "data_size": 63488 00:16:11.802 }, 00:16:11.802 { 00:16:11.802 "name": null, 00:16:11.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.802 "is_configured": false, 00:16:11.802 "data_offset": 2048, 00:16:11.802 "data_size": 63488 00:16:11.802 }, 00:16:11.802 { 00:16:11.802 "name": "BaseBdev3", 00:16:11.802 "uuid": "a9eb3604-ee3d-5136-9a40-75a3cad07a28", 00:16:11.802 "is_configured": true, 00:16:11.802 "data_offset": 2048, 00:16:11.802 "data_size": 63488 00:16:11.802 }, 00:16:11.802 { 00:16:11.802 "name": "BaseBdev4", 00:16:11.802 "uuid": "d1ec7ab8-dfb0-576d-bf6c-08dc42aa84a7", 00:16:11.802 "is_configured": true, 00:16:11.802 "data_offset": 2048, 00:16:11.802 "data_size": 63488 00:16:11.802 } 00:16:11.802 ] 00:16:11.802 }' 00:16:11.802 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79230 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79230 ']' 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79230 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:16:12.061 03:27:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79230 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:12.061 killing process with pid 79230 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79230' 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79230 00:16:12.061 Received shutdown signal, test time was about 19.523804 seconds 00:16:12.061 00:16:12.061 Latency(us) 00:16:12.061 [2024-11-05T03:27:25.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.061 [2024-11-05T03:27:25.700Z] =================================================================================================================== 00:16:12.061 [2024-11-05T03:27:25.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.061 [2024-11-05 03:27:25.546985] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.061 03:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79230 00:16:12.061 [2024-11-05 03:27:25.547130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.061 [2024-11-05 03:27:25.547227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.061 [2024-11-05 03:27:25.547244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:12.319 [2024-11-05 03:27:25.899127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.695 03:27:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:16:13.695 00:16:13.695 real 0m23.138s 00:16:13.695 user 0m31.776s 00:16:13.695 sys 0m2.365s 00:16:13.695 03:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.695 03:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 ************************************ 00:16:13.695 END TEST raid_rebuild_test_sb_io 00:16:13.695 ************************************ 00:16:13.695 03:27:26 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:13.695 03:27:26 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:13.695 03:27:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:13.695 03:27:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:13.695 03:27:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 ************************************ 00:16:13.695 START TEST raid5f_state_function_test 00:16:13.695 ************************************ 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79963 00:16:13.695 Process raid pid: 79963 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79963' 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79963 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 79963 ']' 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:13.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:13.695 03:27:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 [2024-11-05 03:27:27.098705] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:16:13.695 [2024-11-05 03:27:27.098903] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.695 [2024-11-05 03:27:27.286489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.953 [2024-11-05 03:27:27.417544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.212 [2024-11-05 03:27:27.617858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.212 [2024-11-05 03:27:27.617933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.471 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:14.471 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:14.471 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:14.471 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.471 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.471 [2024-11-05 03:27:28.101072] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.471 [2024-11-05 03:27:28.101143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.471 [2024-11-05 03:27:28.101160] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.471 [2024-11-05 03:27:28.101176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.471 [2024-11-05 03:27:28.101187] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:14.471 [2024-11-05 03:27:28.101201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.731 "name": "Existed_Raid", 00:16:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.731 "strip_size_kb": 64, 00:16:14.731 "state": "configuring", 00:16:14.731 "raid_level": "raid5f", 00:16:14.731 "superblock": false, 00:16:14.731 "num_base_bdevs": 3, 00:16:14.731 "num_base_bdevs_discovered": 0, 00:16:14.731 "num_base_bdevs_operational": 3, 00:16:14.731 "base_bdevs_list": [ 00:16:14.731 { 00:16:14.731 "name": "BaseBdev1", 00:16:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.731 "is_configured": false, 00:16:14.731 "data_offset": 0, 00:16:14.731 "data_size": 0 00:16:14.731 }, 00:16:14.731 { 00:16:14.731 "name": "BaseBdev2", 00:16:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.731 "is_configured": false, 00:16:14.731 "data_offset": 0, 00:16:14.731 "data_size": 0 00:16:14.731 }, 00:16:14.731 { 00:16:14.731 "name": "BaseBdev3", 00:16:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.731 "is_configured": false, 00:16:14.731 "data_offset": 0, 00:16:14.731 "data_size": 0 00:16:14.731 } 00:16:14.731 ] 00:16:14.731 }' 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.731 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.298 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:15.298 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 [2024-11-05 03:27:28.661205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.299 [2024-11-05 03:27:28.661423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 [2024-11-05 03:27:28.669194] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.299 [2024-11-05 03:27:28.669293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.299 [2024-11-05 03:27:28.669336] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.299 [2024-11-05 03:27:28.669366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.299 [2024-11-05 03:27:28.669378] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:15.299 [2024-11-05 03:27:28.669393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 [2024-11-05 03:27:28.714570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.299 BaseBdev1 00:16:15.299 03:27:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 [ 00:16:15.299 { 00:16:15.299 "name": "BaseBdev1", 00:16:15.299 "aliases": [ 00:16:15.299 "cbda9877-f262-4e9f-bac5-83a7fcc0f798" 00:16:15.299 ], 00:16:15.299 "product_name": "Malloc disk", 00:16:15.299 "block_size": 512, 00:16:15.299 "num_blocks": 65536, 00:16:15.299 "uuid": "cbda9877-f262-4e9f-bac5-83a7fcc0f798", 00:16:15.299 "assigned_rate_limits": { 00:16:15.299 "rw_ios_per_sec": 0, 00:16:15.299 
"rw_mbytes_per_sec": 0, 00:16:15.299 "r_mbytes_per_sec": 0, 00:16:15.299 "w_mbytes_per_sec": 0 00:16:15.299 }, 00:16:15.299 "claimed": true, 00:16:15.299 "claim_type": "exclusive_write", 00:16:15.299 "zoned": false, 00:16:15.299 "supported_io_types": { 00:16:15.299 "read": true, 00:16:15.299 "write": true, 00:16:15.299 "unmap": true, 00:16:15.299 "flush": true, 00:16:15.299 "reset": true, 00:16:15.299 "nvme_admin": false, 00:16:15.299 "nvme_io": false, 00:16:15.299 "nvme_io_md": false, 00:16:15.299 "write_zeroes": true, 00:16:15.299 "zcopy": true, 00:16:15.299 "get_zone_info": false, 00:16:15.299 "zone_management": false, 00:16:15.299 "zone_append": false, 00:16:15.299 "compare": false, 00:16:15.299 "compare_and_write": false, 00:16:15.299 "abort": true, 00:16:15.299 "seek_hole": false, 00:16:15.299 "seek_data": false, 00:16:15.299 "copy": true, 00:16:15.299 "nvme_iov_md": false 00:16:15.299 }, 00:16:15.299 "memory_domains": [ 00:16:15.299 { 00:16:15.299 "dma_device_id": "system", 00:16:15.299 "dma_device_type": 1 00:16:15.299 }, 00:16:15.299 { 00:16:15.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.299 "dma_device_type": 2 00:16:15.299 } 00:16:15.299 ], 00:16:15.299 "driver_specific": {} 00:16:15.299 } 00:16:15.299 ] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.299 03:27:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.299 "name": "Existed_Raid", 00:16:15.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.299 "strip_size_kb": 64, 00:16:15.299 "state": "configuring", 00:16:15.299 "raid_level": "raid5f", 00:16:15.299 "superblock": false, 00:16:15.299 "num_base_bdevs": 3, 00:16:15.299 "num_base_bdevs_discovered": 1, 00:16:15.299 "num_base_bdevs_operational": 3, 00:16:15.299 "base_bdevs_list": [ 00:16:15.299 { 00:16:15.299 "name": "BaseBdev1", 00:16:15.299 "uuid": "cbda9877-f262-4e9f-bac5-83a7fcc0f798", 00:16:15.299 "is_configured": true, 00:16:15.299 "data_offset": 0, 00:16:15.299 "data_size": 65536 00:16:15.299 }, 00:16:15.299 { 00:16:15.299 "name": 
"BaseBdev2", 00:16:15.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.299 "is_configured": false, 00:16:15.299 "data_offset": 0, 00:16:15.299 "data_size": 0 00:16:15.299 }, 00:16:15.299 { 00:16:15.299 "name": "BaseBdev3", 00:16:15.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.299 "is_configured": false, 00:16:15.299 "data_offset": 0, 00:16:15.299 "data_size": 0 00:16:15.299 } 00:16:15.299 ] 00:16:15.299 }' 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.299 03:27:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.867 [2024-11-05 03:27:29.302847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.867 [2024-11-05 03:27:29.303074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.867 [2024-11-05 03:27:29.310893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.867 [2024-11-05 03:27:29.313293] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:15.867 [2024-11-05 03:27:29.313554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.867 [2024-11-05 03:27:29.313583] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:15.867 [2024-11-05 03:27:29.313602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.867 "name": "Existed_Raid", 00:16:15.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.867 "strip_size_kb": 64, 00:16:15.867 "state": "configuring", 00:16:15.867 "raid_level": "raid5f", 00:16:15.867 "superblock": false, 00:16:15.867 "num_base_bdevs": 3, 00:16:15.867 "num_base_bdevs_discovered": 1, 00:16:15.867 "num_base_bdevs_operational": 3, 00:16:15.867 "base_bdevs_list": [ 00:16:15.867 { 00:16:15.867 "name": "BaseBdev1", 00:16:15.867 "uuid": "cbda9877-f262-4e9f-bac5-83a7fcc0f798", 00:16:15.867 "is_configured": true, 00:16:15.867 "data_offset": 0, 00:16:15.867 "data_size": 65536 00:16:15.867 }, 00:16:15.867 { 00:16:15.867 "name": "BaseBdev2", 00:16:15.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.867 "is_configured": false, 00:16:15.867 "data_offset": 0, 00:16:15.867 "data_size": 0 00:16:15.867 }, 00:16:15.867 { 00:16:15.867 "name": "BaseBdev3", 00:16:15.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.867 "is_configured": false, 00:16:15.867 "data_offset": 0, 00:16:15.867 "data_size": 0 00:16:15.867 } 00:16:15.867 ] 00:16:15.867 }' 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.867 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.438 [2024-11-05 03:27:29.895124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.438 BaseBdev2 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.438 03:27:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.438 [ 00:16:16.438 { 00:16:16.438 "name": "BaseBdev2", 00:16:16.438 "aliases": [ 00:16:16.438 "d980dc02-77b3-4051-8163-cd3686b74424" 00:16:16.438 ], 00:16:16.438 "product_name": "Malloc disk", 00:16:16.438 "block_size": 512, 00:16:16.438 "num_blocks": 65536, 00:16:16.438 "uuid": "d980dc02-77b3-4051-8163-cd3686b74424", 00:16:16.438 "assigned_rate_limits": { 00:16:16.438 "rw_ios_per_sec": 0, 00:16:16.438 "rw_mbytes_per_sec": 0, 00:16:16.438 "r_mbytes_per_sec": 0, 00:16:16.438 "w_mbytes_per_sec": 0 00:16:16.438 }, 00:16:16.438 "claimed": true, 00:16:16.438 "claim_type": "exclusive_write", 00:16:16.438 "zoned": false, 00:16:16.438 "supported_io_types": { 00:16:16.439 "read": true, 00:16:16.439 "write": true, 00:16:16.439 "unmap": true, 00:16:16.439 "flush": true, 00:16:16.439 "reset": true, 00:16:16.439 "nvme_admin": false, 00:16:16.439 "nvme_io": false, 00:16:16.439 "nvme_io_md": false, 00:16:16.439 "write_zeroes": true, 00:16:16.439 "zcopy": true, 00:16:16.439 "get_zone_info": false, 00:16:16.439 "zone_management": false, 00:16:16.439 "zone_append": false, 00:16:16.439 "compare": false, 00:16:16.439 "compare_and_write": false, 00:16:16.439 "abort": true, 00:16:16.439 "seek_hole": false, 00:16:16.439 "seek_data": false, 00:16:16.439 "copy": true, 00:16:16.439 "nvme_iov_md": false 00:16:16.439 }, 00:16:16.439 "memory_domains": [ 00:16:16.439 { 00:16:16.439 "dma_device_id": "system", 00:16:16.439 "dma_device_type": 1 00:16:16.439 }, 00:16:16.439 { 00:16:16.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.439 "dma_device_type": 2 00:16:16.439 } 00:16:16.439 ], 00:16:16.439 "driver_specific": {} 00:16:16.439 } 00:16:16.439 ] 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:16.439 "name": "Existed_Raid", 00:16:16.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.439 "strip_size_kb": 64, 00:16:16.439 "state": "configuring", 00:16:16.439 "raid_level": "raid5f", 00:16:16.439 "superblock": false, 00:16:16.439 "num_base_bdevs": 3, 00:16:16.439 "num_base_bdevs_discovered": 2, 00:16:16.439 "num_base_bdevs_operational": 3, 00:16:16.439 "base_bdevs_list": [ 00:16:16.439 { 00:16:16.439 "name": "BaseBdev1", 00:16:16.439 "uuid": "cbda9877-f262-4e9f-bac5-83a7fcc0f798", 00:16:16.439 "is_configured": true, 00:16:16.439 "data_offset": 0, 00:16:16.439 "data_size": 65536 00:16:16.439 }, 00:16:16.439 { 00:16:16.439 "name": "BaseBdev2", 00:16:16.439 "uuid": "d980dc02-77b3-4051-8163-cd3686b74424", 00:16:16.439 "is_configured": true, 00:16:16.439 "data_offset": 0, 00:16:16.439 "data_size": 65536 00:16:16.439 }, 00:16:16.439 { 00:16:16.439 "name": "BaseBdev3", 00:16:16.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.439 "is_configured": false, 00:16:16.439 "data_offset": 0, 00:16:16.439 "data_size": 0 00:16:16.439 } 00:16:16.439 ] 00:16:16.439 }' 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.439 03:27:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.006 [2024-11-05 03:27:30.536273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.006 [2024-11-05 03:27:30.536408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:17.006 [2024-11-05 03:27:30.536433] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:17.006 [2024-11-05 03:27:30.536805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:17.006 [2024-11-05 03:27:30.541913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:17.006 [2024-11-05 03:27:30.541939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:17.006 [2024-11-05 03:27:30.542298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.006 BaseBdev3 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.006 [ 00:16:17.006 { 00:16:17.006 "name": "BaseBdev3", 00:16:17.006 "aliases": [ 00:16:17.006 "1db41ee8-579a-4558-af0c-7aa99aa6a825" 00:16:17.006 ], 00:16:17.006 "product_name": "Malloc disk", 00:16:17.006 "block_size": 512, 00:16:17.006 "num_blocks": 65536, 00:16:17.006 "uuid": "1db41ee8-579a-4558-af0c-7aa99aa6a825", 00:16:17.006 "assigned_rate_limits": { 00:16:17.006 "rw_ios_per_sec": 0, 00:16:17.006 "rw_mbytes_per_sec": 0, 00:16:17.006 "r_mbytes_per_sec": 0, 00:16:17.006 "w_mbytes_per_sec": 0 00:16:17.006 }, 00:16:17.006 "claimed": true, 00:16:17.006 "claim_type": "exclusive_write", 00:16:17.006 "zoned": false, 00:16:17.006 "supported_io_types": { 00:16:17.006 "read": true, 00:16:17.006 "write": true, 00:16:17.006 "unmap": true, 00:16:17.006 "flush": true, 00:16:17.006 "reset": true, 00:16:17.006 "nvme_admin": false, 00:16:17.006 "nvme_io": false, 00:16:17.006 "nvme_io_md": false, 00:16:17.006 "write_zeroes": true, 00:16:17.006 "zcopy": true, 00:16:17.006 "get_zone_info": false, 00:16:17.006 "zone_management": false, 00:16:17.006 "zone_append": false, 00:16:17.006 "compare": false, 00:16:17.006 "compare_and_write": false, 00:16:17.006 "abort": true, 00:16:17.006 "seek_hole": false, 00:16:17.006 "seek_data": false, 00:16:17.006 "copy": true, 00:16:17.006 "nvme_iov_md": false 00:16:17.006 }, 00:16:17.006 "memory_domains": [ 00:16:17.006 { 00:16:17.006 "dma_device_id": "system", 00:16:17.006 "dma_device_type": 1 00:16:17.006 }, 00:16:17.006 { 00:16:17.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.006 "dma_device_type": 2 00:16:17.006 } 00:16:17.006 ], 00:16:17.006 "driver_specific": {} 00:16:17.006 } 00:16:17.006 ] 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.006 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.007 03:27:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.007 "name": "Existed_Raid", 00:16:17.007 "uuid": "5fd16156-73bb-41ba-9b9a-014ea0e9bafb", 00:16:17.007 "strip_size_kb": 64, 00:16:17.007 "state": "online", 00:16:17.007 "raid_level": "raid5f", 00:16:17.007 "superblock": false, 00:16:17.007 "num_base_bdevs": 3, 00:16:17.007 "num_base_bdevs_discovered": 3, 00:16:17.007 "num_base_bdevs_operational": 3, 00:16:17.007 "base_bdevs_list": [ 00:16:17.007 { 00:16:17.007 "name": "BaseBdev1", 00:16:17.007 "uuid": "cbda9877-f262-4e9f-bac5-83a7fcc0f798", 00:16:17.007 "is_configured": true, 00:16:17.007 "data_offset": 0, 00:16:17.007 "data_size": 65536 00:16:17.007 }, 00:16:17.007 { 00:16:17.007 "name": "BaseBdev2", 00:16:17.007 "uuid": "d980dc02-77b3-4051-8163-cd3686b74424", 00:16:17.007 "is_configured": true, 00:16:17.007 "data_offset": 0, 00:16:17.007 "data_size": 65536 00:16:17.007 }, 00:16:17.007 { 00:16:17.007 "name": "BaseBdev3", 00:16:17.007 "uuid": "1db41ee8-579a-4558-af0c-7aa99aa6a825", 00:16:17.007 "is_configured": true, 00:16:17.007 "data_offset": 0, 00:16:17.007 "data_size": 65536 00:16:17.007 } 00:16:17.007 ] 00:16:17.007 }' 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.007 03:27:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.573 03:27:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.573 [2024-11-05 03:27:31.108352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.573 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.573 "name": "Existed_Raid", 00:16:17.573 "aliases": [ 00:16:17.573 "5fd16156-73bb-41ba-9b9a-014ea0e9bafb" 00:16:17.573 ], 00:16:17.573 "product_name": "Raid Volume", 00:16:17.573 "block_size": 512, 00:16:17.573 "num_blocks": 131072, 00:16:17.573 "uuid": "5fd16156-73bb-41ba-9b9a-014ea0e9bafb", 00:16:17.573 "assigned_rate_limits": { 00:16:17.573 "rw_ios_per_sec": 0, 00:16:17.573 "rw_mbytes_per_sec": 0, 00:16:17.573 "r_mbytes_per_sec": 0, 00:16:17.574 "w_mbytes_per_sec": 0 00:16:17.574 }, 00:16:17.574 "claimed": false, 00:16:17.574 "zoned": false, 00:16:17.574 "supported_io_types": { 00:16:17.574 "read": true, 00:16:17.574 "write": true, 00:16:17.574 "unmap": false, 00:16:17.574 "flush": false, 00:16:17.574 "reset": true, 00:16:17.574 "nvme_admin": false, 00:16:17.574 "nvme_io": false, 00:16:17.574 "nvme_io_md": false, 00:16:17.574 "write_zeroes": true, 00:16:17.574 "zcopy": false, 00:16:17.574 "get_zone_info": false, 00:16:17.574 "zone_management": false, 00:16:17.574 "zone_append": false, 
00:16:17.574 "compare": false, 00:16:17.574 "compare_and_write": false, 00:16:17.574 "abort": false, 00:16:17.574 "seek_hole": false, 00:16:17.574 "seek_data": false, 00:16:17.574 "copy": false, 00:16:17.574 "nvme_iov_md": false 00:16:17.574 }, 00:16:17.574 "driver_specific": { 00:16:17.574 "raid": { 00:16:17.574 "uuid": "5fd16156-73bb-41ba-9b9a-014ea0e9bafb", 00:16:17.574 "strip_size_kb": 64, 00:16:17.574 "state": "online", 00:16:17.574 "raid_level": "raid5f", 00:16:17.574 "superblock": false, 00:16:17.574 "num_base_bdevs": 3, 00:16:17.574 "num_base_bdevs_discovered": 3, 00:16:17.574 "num_base_bdevs_operational": 3, 00:16:17.574 "base_bdevs_list": [ 00:16:17.574 { 00:16:17.574 "name": "BaseBdev1", 00:16:17.574 "uuid": "cbda9877-f262-4e9f-bac5-83a7fcc0f798", 00:16:17.574 "is_configured": true, 00:16:17.574 "data_offset": 0, 00:16:17.574 "data_size": 65536 00:16:17.574 }, 00:16:17.574 { 00:16:17.574 "name": "BaseBdev2", 00:16:17.574 "uuid": "d980dc02-77b3-4051-8163-cd3686b74424", 00:16:17.574 "is_configured": true, 00:16:17.574 "data_offset": 0, 00:16:17.574 "data_size": 65536 00:16:17.574 }, 00:16:17.574 { 00:16:17.574 "name": "BaseBdev3", 00:16:17.574 "uuid": "1db41ee8-579a-4558-af0c-7aa99aa6a825", 00:16:17.574 "is_configured": true, 00:16:17.574 "data_offset": 0, 00:16:17.574 "data_size": 65536 00:16:17.574 } 00:16:17.574 ] 00:16:17.574 } 00:16:17.574 } 00:16:17.574 }' 00:16:17.574 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.832 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:17.832 BaseBdev2 00:16:17.832 BaseBdev3' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 [2024-11-05 03:27:31.444222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:18.092 
03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.092 "name": "Existed_Raid", 00:16:18.092 "uuid": "5fd16156-73bb-41ba-9b9a-014ea0e9bafb", 00:16:18.092 "strip_size_kb": 64, 00:16:18.092 "state": 
"online", 00:16:18.092 "raid_level": "raid5f", 00:16:18.092 "superblock": false, 00:16:18.092 "num_base_bdevs": 3, 00:16:18.092 "num_base_bdevs_discovered": 2, 00:16:18.092 "num_base_bdevs_operational": 2, 00:16:18.092 "base_bdevs_list": [ 00:16:18.092 { 00:16:18.092 "name": null, 00:16:18.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.092 "is_configured": false, 00:16:18.092 "data_offset": 0, 00:16:18.092 "data_size": 65536 00:16:18.092 }, 00:16:18.092 { 00:16:18.092 "name": "BaseBdev2", 00:16:18.092 "uuid": "d980dc02-77b3-4051-8163-cd3686b74424", 00:16:18.092 "is_configured": true, 00:16:18.092 "data_offset": 0, 00:16:18.092 "data_size": 65536 00:16:18.092 }, 00:16:18.092 { 00:16:18.092 "name": "BaseBdev3", 00:16:18.092 "uuid": "1db41ee8-579a-4558-af0c-7aa99aa6a825", 00:16:18.092 "is_configured": true, 00:16:18.092 "data_offset": 0, 00:16:18.092 "data_size": 65536 00:16:18.092 } 00:16:18.092 ] 00:16:18.092 }' 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.092 03:27:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.658 [2024-11-05 03:27:32.129152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.658 [2024-11-05 03:27:32.129271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.658 [2024-11-05 03:27:32.212881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.658 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.659 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.659 [2024-11-05 03:27:32.276962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:18.659 [2024-11-05 03:27:32.277017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.918 BaseBdev2 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:18.918 [ 00:16:18.918 { 00:16:18.918 "name": "BaseBdev2", 00:16:18.918 "aliases": [ 00:16:18.918 "4c5d42db-f5a5-4556-9cca-dc759ec89d0f" 00:16:18.918 ], 00:16:18.918 "product_name": "Malloc disk", 00:16:18.918 "block_size": 512, 00:16:18.918 "num_blocks": 65536, 00:16:18.918 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:18.918 "assigned_rate_limits": { 00:16:18.918 "rw_ios_per_sec": 0, 00:16:18.918 "rw_mbytes_per_sec": 0, 00:16:18.918 "r_mbytes_per_sec": 0, 00:16:18.918 "w_mbytes_per_sec": 0 00:16:18.918 }, 00:16:18.918 "claimed": false, 00:16:18.918 "zoned": false, 00:16:18.918 "supported_io_types": { 00:16:18.918 "read": true, 00:16:18.918 "write": true, 00:16:18.918 "unmap": true, 00:16:18.918 "flush": true, 00:16:18.918 "reset": true, 00:16:18.918 "nvme_admin": false, 00:16:18.918 "nvme_io": false, 00:16:18.918 "nvme_io_md": false, 00:16:18.918 "write_zeroes": true, 00:16:18.918 "zcopy": true, 00:16:18.918 "get_zone_info": false, 00:16:18.918 "zone_management": false, 00:16:18.918 "zone_append": false, 00:16:18.918 "compare": false, 00:16:18.918 "compare_and_write": false, 00:16:18.918 "abort": true, 00:16:18.918 "seek_hole": false, 00:16:18.918 "seek_data": false, 00:16:18.918 "copy": true, 00:16:18.918 "nvme_iov_md": false 00:16:18.918 }, 00:16:18.918 "memory_domains": [ 00:16:18.918 { 00:16:18.918 "dma_device_id": "system", 00:16:18.918 "dma_device_type": 1 00:16:18.918 }, 00:16:18.918 { 00:16:18.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.918 "dma_device_type": 2 00:16:18.918 } 00:16:18.918 ], 00:16:18.918 "driver_specific": {} 00:16:18.918 } 00:16:18.918 ] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.918 BaseBdev3 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.918 03:27:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.184 [ 00:16:19.184 { 00:16:19.184 "name": "BaseBdev3", 00:16:19.184 "aliases": [ 00:16:19.184 "01545144-d6e5-40ab-a816-f1dee0baa74a" 00:16:19.184 ], 00:16:19.184 "product_name": "Malloc disk", 00:16:19.184 "block_size": 512, 00:16:19.184 "num_blocks": 65536, 00:16:19.184 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:19.184 "assigned_rate_limits": { 00:16:19.184 "rw_ios_per_sec": 0, 00:16:19.184 "rw_mbytes_per_sec": 0, 00:16:19.184 "r_mbytes_per_sec": 0, 00:16:19.184 "w_mbytes_per_sec": 0 00:16:19.184 }, 00:16:19.184 "claimed": false, 00:16:19.184 "zoned": false, 00:16:19.184 "supported_io_types": { 00:16:19.184 "read": true, 00:16:19.184 "write": true, 00:16:19.184 "unmap": true, 00:16:19.184 "flush": true, 00:16:19.184 "reset": true, 00:16:19.184 "nvme_admin": false, 00:16:19.184 "nvme_io": false, 00:16:19.184 "nvme_io_md": false, 00:16:19.184 "write_zeroes": true, 00:16:19.184 "zcopy": true, 00:16:19.184 "get_zone_info": false, 00:16:19.184 "zone_management": false, 00:16:19.184 "zone_append": false, 00:16:19.184 "compare": false, 00:16:19.184 "compare_and_write": false, 00:16:19.184 "abort": true, 00:16:19.184 "seek_hole": false, 00:16:19.184 "seek_data": false, 00:16:19.184 "copy": true, 00:16:19.184 "nvme_iov_md": false 00:16:19.184 }, 00:16:19.184 "memory_domains": [ 00:16:19.184 { 00:16:19.184 "dma_device_id": "system", 00:16:19.184 "dma_device_type": 1 00:16:19.184 }, 00:16:19.184 { 00:16:19.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.184 "dma_device_type": 2 00:16:19.184 } 00:16:19.184 ], 00:16:19.184 "driver_specific": {} 00:16:19.184 } 00:16:19.184 ] 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:19.184 03:27:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.184 [2024-11-05 03:27:32.579287] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.184 [2024-11-05 03:27:32.579515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.184 [2024-11-05 03:27:32.579666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.184 [2024-11-05 03:27:32.582156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.184 03:27:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.184 "name": "Existed_Raid", 00:16:19.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.184 "strip_size_kb": 64, 00:16:19.184 "state": "configuring", 00:16:19.184 "raid_level": "raid5f", 00:16:19.184 "superblock": false, 00:16:19.184 "num_base_bdevs": 3, 00:16:19.184 "num_base_bdevs_discovered": 2, 00:16:19.184 "num_base_bdevs_operational": 3, 00:16:19.184 "base_bdevs_list": [ 00:16:19.184 { 00:16:19.184 "name": "BaseBdev1", 00:16:19.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.184 "is_configured": false, 00:16:19.184 "data_offset": 0, 00:16:19.184 "data_size": 0 00:16:19.184 }, 00:16:19.184 { 00:16:19.184 "name": "BaseBdev2", 00:16:19.184 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:19.184 "is_configured": true, 00:16:19.184 "data_offset": 0, 00:16:19.184 "data_size": 65536 00:16:19.184 }, 00:16:19.184 { 00:16:19.184 "name": "BaseBdev3", 00:16:19.184 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:19.184 "is_configured": true, 
00:16:19.184 "data_offset": 0, 00:16:19.184 "data_size": 65536 00:16:19.184 } 00:16:19.184 ] 00:16:19.184 }' 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.184 03:27:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.767 [2024-11-05 03:27:33.127472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.767 03:27:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.767 "name": "Existed_Raid", 00:16:19.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.767 "strip_size_kb": 64, 00:16:19.767 "state": "configuring", 00:16:19.767 "raid_level": "raid5f", 00:16:19.767 "superblock": false, 00:16:19.767 "num_base_bdevs": 3, 00:16:19.767 "num_base_bdevs_discovered": 1, 00:16:19.767 "num_base_bdevs_operational": 3, 00:16:19.767 "base_bdevs_list": [ 00:16:19.767 { 00:16:19.767 "name": "BaseBdev1", 00:16:19.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.767 "is_configured": false, 00:16:19.767 "data_offset": 0, 00:16:19.767 "data_size": 0 00:16:19.767 }, 00:16:19.767 { 00:16:19.767 "name": null, 00:16:19.767 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:19.767 "is_configured": false, 00:16:19.767 "data_offset": 0, 00:16:19.767 "data_size": 65536 00:16:19.767 }, 00:16:19.767 { 00:16:19.767 "name": "BaseBdev3", 00:16:19.767 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:19.767 "is_configured": true, 00:16:19.767 "data_offset": 0, 00:16:19.767 "data_size": 65536 00:16:19.767 } 00:16:19.767 ] 00:16:19.767 }' 00:16:19.767 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.767 03:27:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.025 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.026 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.026 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:20.026 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 [2024-11-05 03:27:33.749200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.284 BaseBdev1 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:20.284 03:27:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:20.284 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.285 [ 00:16:20.285 { 00:16:20.285 "name": "BaseBdev1", 00:16:20.285 "aliases": [ 00:16:20.285 "00b00565-ef5c-4d3e-b39e-781305096088" 00:16:20.285 ], 00:16:20.285 "product_name": "Malloc disk", 00:16:20.285 "block_size": 512, 00:16:20.285 "num_blocks": 65536, 00:16:20.285 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:20.285 "assigned_rate_limits": { 00:16:20.285 "rw_ios_per_sec": 0, 00:16:20.285 "rw_mbytes_per_sec": 0, 00:16:20.285 "r_mbytes_per_sec": 0, 00:16:20.285 "w_mbytes_per_sec": 0 00:16:20.285 }, 00:16:20.285 "claimed": true, 00:16:20.285 "claim_type": "exclusive_write", 00:16:20.285 "zoned": false, 00:16:20.285 "supported_io_types": { 00:16:20.285 "read": true, 00:16:20.285 "write": true, 00:16:20.285 "unmap": true, 00:16:20.285 "flush": true, 00:16:20.285 "reset": true, 00:16:20.285 "nvme_admin": false, 00:16:20.285 "nvme_io": false, 00:16:20.285 "nvme_io_md": false, 00:16:20.285 "write_zeroes": true, 00:16:20.285 "zcopy": true, 00:16:20.285 "get_zone_info": false, 00:16:20.285 "zone_management": false, 00:16:20.285 "zone_append": false, 00:16:20.285 
"compare": false, 00:16:20.285 "compare_and_write": false, 00:16:20.285 "abort": true, 00:16:20.285 "seek_hole": false, 00:16:20.285 "seek_data": false, 00:16:20.285 "copy": true, 00:16:20.285 "nvme_iov_md": false 00:16:20.285 }, 00:16:20.285 "memory_domains": [ 00:16:20.285 { 00:16:20.285 "dma_device_id": "system", 00:16:20.285 "dma_device_type": 1 00:16:20.285 }, 00:16:20.285 { 00:16:20.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.285 "dma_device_type": 2 00:16:20.285 } 00:16:20.285 ], 00:16:20.285 "driver_specific": {} 00:16:20.285 } 00:16:20.285 ] 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.285 03:27:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.285 "name": "Existed_Raid", 00:16:20.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.285 "strip_size_kb": 64, 00:16:20.285 "state": "configuring", 00:16:20.285 "raid_level": "raid5f", 00:16:20.285 "superblock": false, 00:16:20.285 "num_base_bdevs": 3, 00:16:20.285 "num_base_bdevs_discovered": 2, 00:16:20.285 "num_base_bdevs_operational": 3, 00:16:20.285 "base_bdevs_list": [ 00:16:20.285 { 00:16:20.285 "name": "BaseBdev1", 00:16:20.285 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:20.285 "is_configured": true, 00:16:20.285 "data_offset": 0, 00:16:20.285 "data_size": 65536 00:16:20.285 }, 00:16:20.285 { 00:16:20.285 "name": null, 00:16:20.285 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:20.285 "is_configured": false, 00:16:20.285 "data_offset": 0, 00:16:20.285 "data_size": 65536 00:16:20.285 }, 00:16:20.285 { 00:16:20.285 "name": "BaseBdev3", 00:16:20.285 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:20.285 "is_configured": true, 00:16:20.285 "data_offset": 0, 00:16:20.285 "data_size": 65536 00:16:20.285 } 00:16:20.285 ] 00:16:20.285 }' 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.285 03:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.853 03:27:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.853 [2024-11-05 03:27:34.405456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.853 03:27:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.853 "name": "Existed_Raid", 00:16:20.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.853 "strip_size_kb": 64, 00:16:20.853 "state": "configuring", 00:16:20.853 "raid_level": "raid5f", 00:16:20.853 "superblock": false, 00:16:20.853 "num_base_bdevs": 3, 00:16:20.853 "num_base_bdevs_discovered": 1, 00:16:20.853 "num_base_bdevs_operational": 3, 00:16:20.853 "base_bdevs_list": [ 00:16:20.853 { 00:16:20.853 "name": "BaseBdev1", 00:16:20.853 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:20.853 "is_configured": true, 00:16:20.853 "data_offset": 0, 00:16:20.853 "data_size": 65536 00:16:20.853 }, 00:16:20.853 { 00:16:20.853 "name": null, 00:16:20.853 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:20.853 "is_configured": false, 00:16:20.853 "data_offset": 0, 00:16:20.853 "data_size": 65536 00:16:20.853 }, 00:16:20.853 { 00:16:20.853 "name": null, 
00:16:20.853 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:20.853 "is_configured": false, 00:16:20.853 "data_offset": 0, 00:16:20.853 "data_size": 65536 00:16:20.853 } 00:16:20.853 ] 00:16:20.853 }' 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.853 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.421 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:21.421 03:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.421 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.421 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.421 03:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.421 [2024-11-05 03:27:35.005739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.421 03:27:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.421 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.679 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.679 "name": "Existed_Raid", 00:16:21.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.679 "strip_size_kb": 64, 00:16:21.679 "state": "configuring", 00:16:21.679 "raid_level": "raid5f", 00:16:21.679 "superblock": false, 00:16:21.679 "num_base_bdevs": 3, 00:16:21.679 "num_base_bdevs_discovered": 2, 00:16:21.679 "num_base_bdevs_operational": 3, 00:16:21.679 "base_bdevs_list": [ 00:16:21.679 { 
00:16:21.679 "name": "BaseBdev1", 00:16:21.679 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:21.679 "is_configured": true, 00:16:21.679 "data_offset": 0, 00:16:21.679 "data_size": 65536 00:16:21.679 }, 00:16:21.679 { 00:16:21.679 "name": null, 00:16:21.679 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:21.679 "is_configured": false, 00:16:21.679 "data_offset": 0, 00:16:21.679 "data_size": 65536 00:16:21.680 }, 00:16:21.680 { 00:16:21.680 "name": "BaseBdev3", 00:16:21.680 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:21.680 "is_configured": true, 00:16:21.680 "data_offset": 0, 00:16:21.680 "data_size": 65536 00:16:21.680 } 00:16:21.680 ] 00:16:21.680 }' 00:16:21.680 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.680 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.938 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.938 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.938 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.938 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:21.938 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.197 [2024-11-05 03:27:35.589926] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.197 "name": "Existed_Raid", 00:16:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.197 "strip_size_kb": 64, 00:16:22.197 "state": "configuring", 00:16:22.197 "raid_level": "raid5f", 00:16:22.197 "superblock": false, 00:16:22.197 "num_base_bdevs": 3, 00:16:22.197 "num_base_bdevs_discovered": 1, 00:16:22.197 "num_base_bdevs_operational": 3, 00:16:22.197 "base_bdevs_list": [ 00:16:22.197 { 00:16:22.197 "name": null, 00:16:22.197 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:22.197 "is_configured": false, 00:16:22.197 "data_offset": 0, 00:16:22.197 "data_size": 65536 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "name": null, 00:16:22.197 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:22.197 "is_configured": false, 00:16:22.197 "data_offset": 0, 00:16:22.197 "data_size": 65536 00:16:22.197 }, 00:16:22.197 { 00:16:22.197 "name": "BaseBdev3", 00:16:22.197 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:22.197 "is_configured": true, 00:16:22.197 "data_offset": 0, 00:16:22.197 "data_size": 65536 00:16:22.197 } 00:16:22.197 ] 00:16:22.197 }' 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.197 03:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 [2024-11-05 03:27:36.256001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.765 03:27:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.765 "name": "Existed_Raid", 00:16:22.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.765 "strip_size_kb": 64, 00:16:22.765 "state": "configuring", 00:16:22.765 "raid_level": "raid5f", 00:16:22.765 "superblock": false, 00:16:22.765 "num_base_bdevs": 3, 00:16:22.765 "num_base_bdevs_discovered": 2, 00:16:22.765 "num_base_bdevs_operational": 3, 00:16:22.765 "base_bdevs_list": [ 00:16:22.765 { 00:16:22.765 "name": null, 00:16:22.765 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:22.765 "is_configured": false, 00:16:22.765 "data_offset": 0, 00:16:22.765 "data_size": 65536 00:16:22.765 }, 00:16:22.765 { 00:16:22.765 "name": "BaseBdev2", 00:16:22.765 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:22.765 "is_configured": true, 00:16:22.765 "data_offset": 0, 00:16:22.765 "data_size": 65536 00:16:22.765 }, 00:16:22.765 { 00:16:22.765 "name": "BaseBdev3", 00:16:22.765 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:22.765 "is_configured": true, 00:16:22.765 "data_offset": 0, 00:16:22.765 "data_size": 65536 00:16:22.765 } 00:16:22.765 ] 00:16:22.765 }' 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.765 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.332 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.332 03:27:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.332 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:23.332 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 00b00565-ef5c-4d3e-b39e-781305096088 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.333 [2024-11-05 03:27:36.961007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:23.333 [2024-11-05 03:27:36.961054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:23.333 [2024-11-05 03:27:36.961068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:23.333 [2024-11-05 03:27:36.961413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:23.333 [2024-11-05 03:27:36.965897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:23.333 [2024-11-05 03:27:36.965921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:23.333 [2024-11-05 03:27:36.966238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.333 NewBaseBdev 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.333 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.592 03:27:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.592 [ 00:16:23.592 { 00:16:23.592 "name": "NewBaseBdev", 00:16:23.592 "aliases": [ 00:16:23.592 "00b00565-ef5c-4d3e-b39e-781305096088" 00:16:23.592 ], 00:16:23.592 "product_name": "Malloc disk", 00:16:23.592 "block_size": 512, 00:16:23.592 "num_blocks": 65536, 00:16:23.592 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:23.592 "assigned_rate_limits": { 00:16:23.592 "rw_ios_per_sec": 0, 00:16:23.592 "rw_mbytes_per_sec": 0, 00:16:23.592 "r_mbytes_per_sec": 0, 00:16:23.592 "w_mbytes_per_sec": 0 00:16:23.592 }, 00:16:23.592 "claimed": true, 00:16:23.592 "claim_type": "exclusive_write", 00:16:23.592 "zoned": false, 00:16:23.592 "supported_io_types": { 00:16:23.592 "read": true, 00:16:23.592 "write": true, 00:16:23.592 "unmap": true, 00:16:23.592 "flush": true, 00:16:23.592 "reset": true, 00:16:23.592 "nvme_admin": false, 00:16:23.592 "nvme_io": false, 00:16:23.592 "nvme_io_md": false, 00:16:23.592 "write_zeroes": true, 00:16:23.592 "zcopy": true, 00:16:23.592 "get_zone_info": false, 00:16:23.592 "zone_management": false, 00:16:23.592 "zone_append": false, 00:16:23.592 "compare": false, 00:16:23.592 "compare_and_write": false, 00:16:23.592 "abort": true, 00:16:23.592 "seek_hole": false, 00:16:23.592 "seek_data": false, 00:16:23.592 "copy": true, 00:16:23.592 "nvme_iov_md": false 00:16:23.592 }, 00:16:23.592 "memory_domains": [ 00:16:23.592 { 00:16:23.592 "dma_device_id": "system", 00:16:23.592 "dma_device_type": 1 00:16:23.592 }, 00:16:23.592 { 00:16:23.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.592 "dma_device_type": 2 00:16:23.592 } 00:16:23.592 ], 00:16:23.592 "driver_specific": {} 00:16:23.592 } 00:16:23.592 ] 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:23.592 03:27:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.592 03:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.592 "name": "Existed_Raid", 00:16:23.592 "uuid": "715c2813-97f5-461b-bf92-d7bb8b46e557", 00:16:23.592 "strip_size_kb": 64, 00:16:23.592 "state": "online", 
00:16:23.592 "raid_level": "raid5f", 00:16:23.592 "superblock": false, 00:16:23.592 "num_base_bdevs": 3, 00:16:23.592 "num_base_bdevs_discovered": 3, 00:16:23.592 "num_base_bdevs_operational": 3, 00:16:23.592 "base_bdevs_list": [ 00:16:23.592 { 00:16:23.592 "name": "NewBaseBdev", 00:16:23.592 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:23.592 "is_configured": true, 00:16:23.592 "data_offset": 0, 00:16:23.592 "data_size": 65536 00:16:23.592 }, 00:16:23.592 { 00:16:23.592 "name": "BaseBdev2", 00:16:23.592 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:23.592 "is_configured": true, 00:16:23.592 "data_offset": 0, 00:16:23.592 "data_size": 65536 00:16:23.592 }, 00:16:23.592 { 00:16:23.592 "name": "BaseBdev3", 00:16:23.592 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:23.592 "is_configured": true, 00:16:23.592 "data_offset": 0, 00:16:23.592 "data_size": 65536 00:16:23.592 } 00:16:23.592 ] 00:16:23.592 }' 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.592 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:24.160 03:27:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.160 [2024-11-05 03:27:37.540276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:24.160 "name": "Existed_Raid", 00:16:24.160 "aliases": [ 00:16:24.160 "715c2813-97f5-461b-bf92-d7bb8b46e557" 00:16:24.160 ], 00:16:24.160 "product_name": "Raid Volume", 00:16:24.160 "block_size": 512, 00:16:24.160 "num_blocks": 131072, 00:16:24.160 "uuid": "715c2813-97f5-461b-bf92-d7bb8b46e557", 00:16:24.160 "assigned_rate_limits": { 00:16:24.160 "rw_ios_per_sec": 0, 00:16:24.160 "rw_mbytes_per_sec": 0, 00:16:24.160 "r_mbytes_per_sec": 0, 00:16:24.160 "w_mbytes_per_sec": 0 00:16:24.160 }, 00:16:24.160 "claimed": false, 00:16:24.160 "zoned": false, 00:16:24.160 "supported_io_types": { 00:16:24.160 "read": true, 00:16:24.160 "write": true, 00:16:24.160 "unmap": false, 00:16:24.160 "flush": false, 00:16:24.160 "reset": true, 00:16:24.160 "nvme_admin": false, 00:16:24.160 "nvme_io": false, 00:16:24.160 "nvme_io_md": false, 00:16:24.160 "write_zeroes": true, 00:16:24.160 "zcopy": false, 00:16:24.160 "get_zone_info": false, 00:16:24.160 "zone_management": false, 00:16:24.160 "zone_append": false, 00:16:24.160 "compare": false, 00:16:24.160 "compare_and_write": false, 00:16:24.160 "abort": false, 00:16:24.160 "seek_hole": false, 00:16:24.160 "seek_data": false, 00:16:24.160 "copy": false, 00:16:24.160 "nvme_iov_md": false 00:16:24.160 }, 00:16:24.160 "driver_specific": { 00:16:24.160 "raid": { 00:16:24.160 "uuid": 
"715c2813-97f5-461b-bf92-d7bb8b46e557", 00:16:24.160 "strip_size_kb": 64, 00:16:24.160 "state": "online", 00:16:24.160 "raid_level": "raid5f", 00:16:24.160 "superblock": false, 00:16:24.160 "num_base_bdevs": 3, 00:16:24.160 "num_base_bdevs_discovered": 3, 00:16:24.160 "num_base_bdevs_operational": 3, 00:16:24.160 "base_bdevs_list": [ 00:16:24.160 { 00:16:24.160 "name": "NewBaseBdev", 00:16:24.160 "uuid": "00b00565-ef5c-4d3e-b39e-781305096088", 00:16:24.160 "is_configured": true, 00:16:24.160 "data_offset": 0, 00:16:24.160 "data_size": 65536 00:16:24.160 }, 00:16:24.160 { 00:16:24.160 "name": "BaseBdev2", 00:16:24.160 "uuid": "4c5d42db-f5a5-4556-9cca-dc759ec89d0f", 00:16:24.160 "is_configured": true, 00:16:24.160 "data_offset": 0, 00:16:24.160 "data_size": 65536 00:16:24.160 }, 00:16:24.160 { 00:16:24.160 "name": "BaseBdev3", 00:16:24.160 "uuid": "01545144-d6e5-40ab-a816-f1dee0baa74a", 00:16:24.160 "is_configured": true, 00:16:24.160 "data_offset": 0, 00:16:24.160 "data_size": 65536 00:16:24.160 } 00:16:24.160 ] 00:16:24.160 } 00:16:24.160 } 00:16:24.160 }' 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:24.160 BaseBdev2 00:16:24.160 BaseBdev3' 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:24.160 03:27:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.161 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.420 [2024-11-05 03:27:37.856138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.420 [2024-11-05 03:27:37.856171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.420 [2024-11-05 03:27:37.856275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.420 [2024-11-05 03:27:37.856729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.420 [2024-11-05 03:27:37.856752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79963 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 79963 ']' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 79963 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79963 00:16:24.420 killing process with pid 79963 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79963' 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 79963 00:16:24.420 03:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 79963 00:16:24.420 [2024-11-05 03:27:37.898711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.679 [2024-11-05 03:27:38.147638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:25.615 00:16:25.615 real 0m12.170s 00:16:25.615 user 0m20.263s 00:16:25.615 sys 0m1.773s 00:16:25.615 ************************************ 00:16:25.615 END TEST raid5f_state_function_test 00:16:25.615 ************************************ 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.615 03:27:39 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:25.615 03:27:39 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:25.615 03:27:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:25.615 03:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.615 ************************************ 00:16:25.615 START TEST raid5f_state_function_test_sb 00:16:25.615 ************************************ 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:25.615 03:27:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:25.615 Process raid pid: 80601 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80601 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80601' 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80601 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80601 ']' 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.615 03:27:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.874 [2024-11-05 03:27:39.330560] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:16:25.874 [2024-11-05 03:27:39.330759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.132 [2024-11-05 03:27:39.518256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.132 [2024-11-05 03:27:39.645577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.391 [2024-11-05 03:27:39.841350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.391 [2024-11-05 03:27:39.841485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.958 [2024-11-05 03:27:40.309305] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.958 [2024-11-05 03:27:40.309388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.958 [2024-11-05 03:27:40.309406] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.958 [2024-11-05 03:27:40.309422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.958 [2024-11-05 03:27:40.309432] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:26.958 [2024-11-05 03:27:40.309445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.958 03:27:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.958 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.958 "name": "Existed_Raid", 00:16:26.958 "uuid": "ae4d1cf4-b389-4823-9c61-4b7835a1bd48", 00:16:26.958 "strip_size_kb": 64, 00:16:26.958 "state": "configuring", 00:16:26.958 "raid_level": "raid5f", 00:16:26.958 "superblock": true, 00:16:26.958 "num_base_bdevs": 3, 00:16:26.958 "num_base_bdevs_discovered": 0, 00:16:26.958 "num_base_bdevs_operational": 3, 00:16:26.958 "base_bdevs_list": [ 00:16:26.958 { 00:16:26.958 "name": "BaseBdev1", 00:16:26.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.958 "is_configured": false, 00:16:26.959 "data_offset": 0, 00:16:26.959 "data_size": 0 00:16:26.959 }, 00:16:26.959 { 00:16:26.959 "name": "BaseBdev2", 00:16:26.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.959 "is_configured": false, 00:16:26.959 "data_offset": 0, 00:16:26.959 "data_size": 0 00:16:26.959 }, 00:16:26.959 { 00:16:26.959 "name": "BaseBdev3", 00:16:26.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.959 "is_configured": false, 00:16:26.959 "data_offset": 0, 00:16:26.959 "data_size": 0 00:16:26.959 } 00:16:26.959 ] 00:16:26.959 }' 00:16:26.959 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.959 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.217 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.217 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.217 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.217 [2024-11-05 03:27:40.805468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.218 
[2024-11-05 03:27:40.805651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.218 [2024-11-05 03:27:40.817434] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.218 [2024-11-05 03:27:40.817610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.218 [2024-11-05 03:27:40.817739] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.218 [2024-11-05 03:27:40.817811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.218 [2024-11-05 03:27:40.817918] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.218 [2024-11-05 03:27:40.817975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.218 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.476 [2024-11-05 03:27:40.867304] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.476 BaseBdev1 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.476 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.476 [ 00:16:27.476 { 00:16:27.476 "name": "BaseBdev1", 00:16:27.476 "aliases": [ 00:16:27.476 "965ea208-8e74-41df-b38d-c8b234428ec4" 00:16:27.476 ], 00:16:27.476 "product_name": "Malloc disk", 00:16:27.476 "block_size": 512, 00:16:27.476 
"num_blocks": 65536, 00:16:27.476 "uuid": "965ea208-8e74-41df-b38d-c8b234428ec4", 00:16:27.476 "assigned_rate_limits": { 00:16:27.476 "rw_ios_per_sec": 0, 00:16:27.476 "rw_mbytes_per_sec": 0, 00:16:27.476 "r_mbytes_per_sec": 0, 00:16:27.476 "w_mbytes_per_sec": 0 00:16:27.476 }, 00:16:27.476 "claimed": true, 00:16:27.476 "claim_type": "exclusive_write", 00:16:27.476 "zoned": false, 00:16:27.476 "supported_io_types": { 00:16:27.476 "read": true, 00:16:27.476 "write": true, 00:16:27.476 "unmap": true, 00:16:27.476 "flush": true, 00:16:27.476 "reset": true, 00:16:27.476 "nvme_admin": false, 00:16:27.476 "nvme_io": false, 00:16:27.476 "nvme_io_md": false, 00:16:27.476 "write_zeroes": true, 00:16:27.476 "zcopy": true, 00:16:27.476 "get_zone_info": false, 00:16:27.476 "zone_management": false, 00:16:27.476 "zone_append": false, 00:16:27.476 "compare": false, 00:16:27.476 "compare_and_write": false, 00:16:27.476 "abort": true, 00:16:27.476 "seek_hole": false, 00:16:27.476 "seek_data": false, 00:16:27.476 "copy": true, 00:16:27.476 "nvme_iov_md": false 00:16:27.476 }, 00:16:27.476 "memory_domains": [ 00:16:27.476 { 00:16:27.476 "dma_device_id": "system", 00:16:27.476 "dma_device_type": 1 00:16:27.476 }, 00:16:27.476 { 00:16:27.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.477 "dma_device_type": 2 00:16:27.477 } 00:16:27.477 ], 00:16:27.477 "driver_specific": {} 00:16:27.477 } 00:16:27.477 ] 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.477 "name": "Existed_Raid", 00:16:27.477 "uuid": "c6d191e4-d72b-4e55-be11-493df5db6df3", 00:16:27.477 "strip_size_kb": 64, 00:16:27.477 "state": "configuring", 00:16:27.477 "raid_level": "raid5f", 00:16:27.477 "superblock": true, 00:16:27.477 "num_base_bdevs": 3, 00:16:27.477 "num_base_bdevs_discovered": 1, 00:16:27.477 "num_base_bdevs_operational": 3, 00:16:27.477 "base_bdevs_list": [ 00:16:27.477 { 00:16:27.477 
"name": "BaseBdev1", 00:16:27.477 "uuid": "965ea208-8e74-41df-b38d-c8b234428ec4", 00:16:27.477 "is_configured": true, 00:16:27.477 "data_offset": 2048, 00:16:27.477 "data_size": 63488 00:16:27.477 }, 00:16:27.477 { 00:16:27.477 "name": "BaseBdev2", 00:16:27.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.477 "is_configured": false, 00:16:27.477 "data_offset": 0, 00:16:27.477 "data_size": 0 00:16:27.477 }, 00:16:27.477 { 00:16:27.477 "name": "BaseBdev3", 00:16:27.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.477 "is_configured": false, 00:16:27.477 "data_offset": 0, 00:16:27.477 "data_size": 0 00:16:27.477 } 00:16:27.477 ] 00:16:27.477 }' 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.477 03:27:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.044 [2024-11-05 03:27:41.443548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.044 [2024-11-05 03:27:41.443608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:28.044 [2024-11-05 03:27:41.455614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.044 [2024-11-05 03:27:41.458421] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.044 [2024-11-05 03:27:41.458626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.044 [2024-11-05 03:27:41.458746] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.044 [2024-11-05 03:27:41.458912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.044 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.044 "name": "Existed_Raid", 00:16:28.044 "uuid": "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4", 00:16:28.044 "strip_size_kb": 64, 00:16:28.044 "state": "configuring", 00:16:28.044 "raid_level": "raid5f", 00:16:28.044 "superblock": true, 00:16:28.044 "num_base_bdevs": 3, 00:16:28.044 "num_base_bdevs_discovered": 1, 00:16:28.044 "num_base_bdevs_operational": 3, 00:16:28.044 "base_bdevs_list": [ 00:16:28.044 { 00:16:28.044 "name": "BaseBdev1", 00:16:28.044 "uuid": "965ea208-8e74-41df-b38d-c8b234428ec4", 00:16:28.044 "is_configured": true, 00:16:28.044 "data_offset": 2048, 00:16:28.044 "data_size": 63488 00:16:28.044 }, 00:16:28.044 { 00:16:28.045 "name": "BaseBdev2", 00:16:28.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.045 "is_configured": false, 00:16:28.045 "data_offset": 0, 00:16:28.045 "data_size": 0 00:16:28.045 }, 00:16:28.045 { 00:16:28.045 "name": "BaseBdev3", 00:16:28.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.045 "is_configured": false, 00:16:28.045 "data_offset": 0, 00:16:28.045 "data_size": 
0 00:16:28.045 } 00:16:28.045 ] 00:16:28.045 }' 00:16:28.045 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.045 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.612 03:27:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.612 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.612 03:27:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.612 [2024-11-05 03:27:42.036049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.612 BaseBdev2 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.612 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.612 [ 00:16:28.612 { 00:16:28.612 "name": "BaseBdev2", 00:16:28.612 "aliases": [ 00:16:28.612 "1d3b9f40-93a5-4725-bc96-a11ca5c6b249" 00:16:28.612 ], 00:16:28.612 "product_name": "Malloc disk", 00:16:28.612 "block_size": 512, 00:16:28.612 "num_blocks": 65536, 00:16:28.612 "uuid": "1d3b9f40-93a5-4725-bc96-a11ca5c6b249", 00:16:28.612 "assigned_rate_limits": { 00:16:28.612 "rw_ios_per_sec": 0, 00:16:28.612 "rw_mbytes_per_sec": 0, 00:16:28.612 "r_mbytes_per_sec": 0, 00:16:28.612 "w_mbytes_per_sec": 0 00:16:28.612 }, 00:16:28.612 "claimed": true, 00:16:28.612 "claim_type": "exclusive_write", 00:16:28.612 "zoned": false, 00:16:28.612 "supported_io_types": { 00:16:28.612 "read": true, 00:16:28.612 "write": true, 00:16:28.612 "unmap": true, 00:16:28.612 "flush": true, 00:16:28.612 "reset": true, 00:16:28.612 "nvme_admin": false, 00:16:28.612 "nvme_io": false, 00:16:28.613 "nvme_io_md": false, 00:16:28.613 "write_zeroes": true, 00:16:28.613 "zcopy": true, 00:16:28.613 "get_zone_info": false, 00:16:28.613 "zone_management": false, 00:16:28.613 "zone_append": false, 00:16:28.613 "compare": false, 00:16:28.613 "compare_and_write": false, 00:16:28.613 "abort": true, 00:16:28.613 "seek_hole": false, 00:16:28.613 "seek_data": false, 00:16:28.613 "copy": true, 00:16:28.613 "nvme_iov_md": false 00:16:28.613 }, 00:16:28.613 "memory_domains": [ 00:16:28.613 { 00:16:28.613 "dma_device_id": "system", 00:16:28.613 "dma_device_type": 1 00:16:28.613 }, 00:16:28.613 { 00:16:28.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.613 "dma_device_type": 2 00:16:28.613 } 
00:16:28.613 ], 00:16:28.613 "driver_specific": {} 00:16:28.613 } 00:16:28.613 ] 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.613 03:27:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.613 "name": "Existed_Raid", 00:16:28.613 "uuid": "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4", 00:16:28.613 "strip_size_kb": 64, 00:16:28.613 "state": "configuring", 00:16:28.613 "raid_level": "raid5f", 00:16:28.613 "superblock": true, 00:16:28.613 "num_base_bdevs": 3, 00:16:28.613 "num_base_bdevs_discovered": 2, 00:16:28.613 "num_base_bdevs_operational": 3, 00:16:28.613 "base_bdevs_list": [ 00:16:28.613 { 00:16:28.613 "name": "BaseBdev1", 00:16:28.613 "uuid": "965ea208-8e74-41df-b38d-c8b234428ec4", 00:16:28.613 "is_configured": true, 00:16:28.613 "data_offset": 2048, 00:16:28.613 "data_size": 63488 00:16:28.613 }, 00:16:28.613 { 00:16:28.613 "name": "BaseBdev2", 00:16:28.613 "uuid": "1d3b9f40-93a5-4725-bc96-a11ca5c6b249", 00:16:28.613 "is_configured": true, 00:16:28.613 "data_offset": 2048, 00:16:28.613 "data_size": 63488 00:16:28.613 }, 00:16:28.613 { 00:16:28.613 "name": "BaseBdev3", 00:16:28.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.613 "is_configured": false, 00:16:28.613 "data_offset": 0, 00:16:28.613 "data_size": 0 00:16:28.613 } 00:16:28.613 ] 00:16:28.613 }' 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.613 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.181 [2024-11-05 03:27:42.645824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.181 [2024-11-05 03:27:42.646144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:29.181 [2024-11-05 03:27:42.646172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:29.181 BaseBdev3 00:16:29.181 [2024-11-05 03:27:42.646557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.181 [2024-11-05 03:27:42.651957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:29.181 [2024-11-05 03:27:42.651982] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:29.181 [2024-11-05 03:27:42.652291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.181 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.181 [ 00:16:29.181 { 00:16:29.181 "name": "BaseBdev3", 00:16:29.181 "aliases": [ 00:16:29.181 "06fe5f66-7776-49f3-b432-e9b0dc0623e6" 00:16:29.181 ], 00:16:29.181 "product_name": "Malloc disk", 00:16:29.181 "block_size": 512, 00:16:29.181 "num_blocks": 65536, 00:16:29.181 "uuid": "06fe5f66-7776-49f3-b432-e9b0dc0623e6", 00:16:29.181 "assigned_rate_limits": { 00:16:29.181 "rw_ios_per_sec": 0, 00:16:29.181 "rw_mbytes_per_sec": 0, 00:16:29.181 "r_mbytes_per_sec": 0, 00:16:29.181 "w_mbytes_per_sec": 0 00:16:29.181 }, 00:16:29.181 "claimed": true, 00:16:29.181 "claim_type": "exclusive_write", 00:16:29.181 "zoned": false, 00:16:29.181 "supported_io_types": { 00:16:29.181 "read": true, 00:16:29.181 "write": true, 00:16:29.181 "unmap": true, 00:16:29.181 "flush": true, 00:16:29.181 "reset": true, 00:16:29.181 "nvme_admin": false, 00:16:29.181 "nvme_io": false, 00:16:29.181 "nvme_io_md": false, 00:16:29.181 "write_zeroes": true, 00:16:29.181 "zcopy": true, 00:16:29.181 "get_zone_info": false, 00:16:29.181 "zone_management": false, 00:16:29.181 "zone_append": false, 00:16:29.181 "compare": false, 00:16:29.181 "compare_and_write": false, 00:16:29.181 "abort": true, 00:16:29.181 "seek_hole": false, 00:16:29.181 "seek_data": false, 00:16:29.181 "copy": true, 00:16:29.181 
"nvme_iov_md": false 00:16:29.181 }, 00:16:29.181 "memory_domains": [ 00:16:29.181 { 00:16:29.181 "dma_device_id": "system", 00:16:29.181 "dma_device_type": 1 00:16:29.181 }, 00:16:29.182 { 00:16:29.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.182 "dma_device_type": 2 00:16:29.182 } 00:16:29.182 ], 00:16:29.182 "driver_specific": {} 00:16:29.182 } 00:16:29.182 ] 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.182 "name": "Existed_Raid", 00:16:29.182 "uuid": "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4", 00:16:29.182 "strip_size_kb": 64, 00:16:29.182 "state": "online", 00:16:29.182 "raid_level": "raid5f", 00:16:29.182 "superblock": true, 00:16:29.182 "num_base_bdevs": 3, 00:16:29.182 "num_base_bdevs_discovered": 3, 00:16:29.182 "num_base_bdevs_operational": 3, 00:16:29.182 "base_bdevs_list": [ 00:16:29.182 { 00:16:29.182 "name": "BaseBdev1", 00:16:29.182 "uuid": "965ea208-8e74-41df-b38d-c8b234428ec4", 00:16:29.182 "is_configured": true, 00:16:29.182 "data_offset": 2048, 00:16:29.182 "data_size": 63488 00:16:29.182 }, 00:16:29.182 { 00:16:29.182 "name": "BaseBdev2", 00:16:29.182 "uuid": "1d3b9f40-93a5-4725-bc96-a11ca5c6b249", 00:16:29.182 "is_configured": true, 00:16:29.182 "data_offset": 2048, 00:16:29.182 "data_size": 63488 00:16:29.182 }, 00:16:29.182 { 00:16:29.182 "name": "BaseBdev3", 00:16:29.182 "uuid": "06fe5f66-7776-49f3-b432-e9b0dc0623e6", 00:16:29.182 "is_configured": true, 00:16:29.182 "data_offset": 2048, 00:16:29.182 "data_size": 63488 00:16:29.182 } 00:16:29.182 ] 00:16:29.182 }' 00:16:29.182 03:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.182 03:27:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.775 [2024-11-05 03:27:43.210393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.775 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.775 "name": "Existed_Raid", 00:16:29.775 "aliases": [ 00:16:29.775 "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4" 00:16:29.775 ], 00:16:29.775 "product_name": "Raid Volume", 00:16:29.775 "block_size": 512, 00:16:29.775 "num_blocks": 126976, 00:16:29.775 "uuid": "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4", 00:16:29.775 "assigned_rate_limits": { 00:16:29.775 "rw_ios_per_sec": 0, 00:16:29.775 
"rw_mbytes_per_sec": 0, 00:16:29.775 "r_mbytes_per_sec": 0, 00:16:29.775 "w_mbytes_per_sec": 0 00:16:29.775 }, 00:16:29.775 "claimed": false, 00:16:29.775 "zoned": false, 00:16:29.775 "supported_io_types": { 00:16:29.775 "read": true, 00:16:29.775 "write": true, 00:16:29.775 "unmap": false, 00:16:29.775 "flush": false, 00:16:29.775 "reset": true, 00:16:29.775 "nvme_admin": false, 00:16:29.775 "nvme_io": false, 00:16:29.775 "nvme_io_md": false, 00:16:29.775 "write_zeroes": true, 00:16:29.775 "zcopy": false, 00:16:29.775 "get_zone_info": false, 00:16:29.775 "zone_management": false, 00:16:29.775 "zone_append": false, 00:16:29.775 "compare": false, 00:16:29.775 "compare_and_write": false, 00:16:29.775 "abort": false, 00:16:29.775 "seek_hole": false, 00:16:29.775 "seek_data": false, 00:16:29.775 "copy": false, 00:16:29.775 "nvme_iov_md": false 00:16:29.775 }, 00:16:29.775 "driver_specific": { 00:16:29.775 "raid": { 00:16:29.775 "uuid": "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4", 00:16:29.775 "strip_size_kb": 64, 00:16:29.775 "state": "online", 00:16:29.775 "raid_level": "raid5f", 00:16:29.775 "superblock": true, 00:16:29.775 "num_base_bdevs": 3, 00:16:29.776 "num_base_bdevs_discovered": 3, 00:16:29.776 "num_base_bdevs_operational": 3, 00:16:29.776 "base_bdevs_list": [ 00:16:29.776 { 00:16:29.776 "name": "BaseBdev1", 00:16:29.776 "uuid": "965ea208-8e74-41df-b38d-c8b234428ec4", 00:16:29.776 "is_configured": true, 00:16:29.776 "data_offset": 2048, 00:16:29.776 "data_size": 63488 00:16:29.776 }, 00:16:29.776 { 00:16:29.776 "name": "BaseBdev2", 00:16:29.776 "uuid": "1d3b9f40-93a5-4725-bc96-a11ca5c6b249", 00:16:29.776 "is_configured": true, 00:16:29.776 "data_offset": 2048, 00:16:29.776 "data_size": 63488 00:16:29.776 }, 00:16:29.776 { 00:16:29.776 "name": "BaseBdev3", 00:16:29.776 "uuid": "06fe5f66-7776-49f3-b432-e9b0dc0623e6", 00:16:29.776 "is_configured": true, 00:16:29.776 "data_offset": 2048, 00:16:29.776 "data_size": 63488 00:16:29.776 } 00:16:29.776 ] 00:16:29.776 } 
00:16:29.776 } 00:16:29.776 }' 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:29.776 BaseBdev2 00:16:29.776 BaseBdev3' 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.776 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.038 [2024-11-05 
03:27:43.538360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.038 03:27:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.038 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.039 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.039 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.296 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.296 "name": "Existed_Raid", 00:16:30.296 "uuid": "43d5f3a8-2328-4c34-a6d3-84c59aeae4c4", 00:16:30.296 "strip_size_kb": 64, 00:16:30.296 "state": "online", 00:16:30.296 "raid_level": "raid5f", 00:16:30.296 "superblock": true, 00:16:30.296 "num_base_bdevs": 3, 00:16:30.296 "num_base_bdevs_discovered": 2, 00:16:30.296 "num_base_bdevs_operational": 2, 00:16:30.296 "base_bdevs_list": [ 00:16:30.296 { 00:16:30.296 "name": null, 00:16:30.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.296 "is_configured": false, 00:16:30.296 "data_offset": 0, 00:16:30.296 "data_size": 63488 00:16:30.296 }, 00:16:30.296 { 00:16:30.296 "name": "BaseBdev2", 00:16:30.296 "uuid": "1d3b9f40-93a5-4725-bc96-a11ca5c6b249", 00:16:30.296 "is_configured": true, 00:16:30.296 "data_offset": 2048, 00:16:30.296 "data_size": 63488 00:16:30.296 }, 00:16:30.296 { 00:16:30.296 "name": "BaseBdev3", 00:16:30.296 "uuid": "06fe5f66-7776-49f3-b432-e9b0dc0623e6", 00:16:30.296 "is_configured": true, 00:16:30.296 "data_offset": 2048, 00:16:30.296 "data_size": 63488 00:16:30.296 } 00:16:30.296 ] 00:16:30.296 }' 00:16:30.296 03:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.296 03:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
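The `verify_raid_bdev_state` calls traced above fetch the raid bdev dump with `rpc_cmd bdev_raid_get_bdevs all`, pick out the entry by name with `jq`, and then compare individual fields (`state`, `num_base_bdevs_discovered`, and so on) against the expected values. A minimal standalone sketch of that check, using JSON abbreviated from the dump in this log (assumes `jq` is installed, as on the autotest hosts):

```shell
#!/bin/sh
# Abbreviated raid bdev dump, matching the shape printed by
# `rpc_cmd bdev_raid_get_bdevs all` in the trace above.
raid_bdev_info='[{ "name": "Existed_Raid", "state": "online",
                   "raid_level": "raid5f", "strip_size_kb": 64,
                   "num_base_bdevs": 3, "num_base_bdevs_discovered": 2,
                   "num_base_bdevs_operational": 2 }]'

# Same selection the test script performs: pick the entry by name...
tmp=$(echo "$raid_bdev_info" | jq -r '.[] | select(.name == "Existed_Raid")')

# ...then compare individual fields, as verify_raid_bdev_state does.
state=$(echo "$tmp" | jq -r '.state')
discovered=$(echo "$tmp" | jq -r '.num_base_bdevs_discovered')
[ "$state" = "online" ] && [ "$discovered" = "2" ] && echo "state check passed"
```

After `bdev_malloc_delete BaseBdev1`, the trace shows exactly this outcome: the array stays `online` (raid5f has redundancy) with two of three base bdevs discovered.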
00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:30.555 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.813 [2024-11-05 03:27:44.214165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:30.813 [2024-11-05 03:27:44.214353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.813 [2024-11-05 03:27:44.301448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:30.813 03:27:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.813 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.813 [2024-11-05 03:27:44.377504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.813 [2024-11-05 03:27:44.377570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.072 
03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.072 BaseBdev2 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:31.072 03:27:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.072 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.072 [ 00:16:31.072 { 00:16:31.072 "name": "BaseBdev2", 00:16:31.072 "aliases": [ 00:16:31.072 "a7841b69-5219-4ef7-b72f-8427353ea3ec" 00:16:31.072 ], 00:16:31.072 "product_name": "Malloc disk", 00:16:31.072 "block_size": 512, 00:16:31.072 "num_blocks": 65536, 00:16:31.072 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:31.073 "assigned_rate_limits": { 00:16:31.073 "rw_ios_per_sec": 0, 00:16:31.073 "rw_mbytes_per_sec": 0, 00:16:31.073 "r_mbytes_per_sec": 0, 00:16:31.073 "w_mbytes_per_sec": 0 00:16:31.073 }, 00:16:31.073 "claimed": false, 00:16:31.073 "zoned": false, 00:16:31.073 "supported_io_types": { 00:16:31.073 "read": true, 00:16:31.073 "write": true, 00:16:31.073 "unmap": true, 00:16:31.073 "flush": true, 00:16:31.073 "reset": true, 00:16:31.073 "nvme_admin": false, 00:16:31.073 "nvme_io": false, 00:16:31.073 "nvme_io_md": false, 00:16:31.073 "write_zeroes": true, 00:16:31.073 "zcopy": true, 00:16:31.073 "get_zone_info": false, 
00:16:31.073 "zone_management": false, 00:16:31.073 "zone_append": false, 00:16:31.073 "compare": false, 00:16:31.073 "compare_and_write": false, 00:16:31.073 "abort": true, 00:16:31.073 "seek_hole": false, 00:16:31.073 "seek_data": false, 00:16:31.073 "copy": true, 00:16:31.073 "nvme_iov_md": false 00:16:31.073 }, 00:16:31.073 "memory_domains": [ 00:16:31.073 { 00:16:31.073 "dma_device_id": "system", 00:16:31.073 "dma_device_type": 1 00:16:31.073 }, 00:16:31.073 { 00:16:31.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.073 "dma_device_type": 2 00:16:31.073 } 00:16:31.073 ], 00:16:31.073 "driver_specific": {} 00:16:31.073 } 00:16:31.073 ] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.073 BaseBdev3 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.073 03:27:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.073 [ 00:16:31.073 { 00:16:31.073 "name": "BaseBdev3", 00:16:31.073 "aliases": [ 00:16:31.073 "272c51ff-6681-4d53-9ea5-753415910211" 00:16:31.073 ], 00:16:31.073 "product_name": "Malloc disk", 00:16:31.073 "block_size": 512, 00:16:31.073 "num_blocks": 65536, 00:16:31.073 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:31.073 "assigned_rate_limits": { 00:16:31.073 "rw_ios_per_sec": 0, 00:16:31.073 "rw_mbytes_per_sec": 0, 00:16:31.073 "r_mbytes_per_sec": 0, 00:16:31.073 "w_mbytes_per_sec": 0 00:16:31.073 }, 00:16:31.073 "claimed": false, 00:16:31.073 "zoned": false, 00:16:31.073 "supported_io_types": { 00:16:31.073 "read": true, 00:16:31.073 "write": true, 00:16:31.073 "unmap": true, 00:16:31.073 "flush": true, 00:16:31.073 "reset": true, 00:16:31.073 "nvme_admin": false, 00:16:31.073 "nvme_io": false, 00:16:31.073 "nvme_io_md": 
false, 00:16:31.073 "write_zeroes": true, 00:16:31.073 "zcopy": true, 00:16:31.073 "get_zone_info": false, 00:16:31.073 "zone_management": false, 00:16:31.073 "zone_append": false, 00:16:31.073 "compare": false, 00:16:31.073 "compare_and_write": false, 00:16:31.073 "abort": true, 00:16:31.073 "seek_hole": false, 00:16:31.073 "seek_data": false, 00:16:31.073 "copy": true, 00:16:31.073 "nvme_iov_md": false 00:16:31.073 }, 00:16:31.073 "memory_domains": [ 00:16:31.073 { 00:16:31.073 "dma_device_id": "system", 00:16:31.073 "dma_device_type": 1 00:16:31.073 }, 00:16:31.073 { 00:16:31.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.073 "dma_device_type": 2 00:16:31.073 } 00:16:31.073 ], 00:16:31.073 "driver_specific": {} 00:16:31.073 } 00:16:31.073 ] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.073 [2024-11-05 03:27:44.689168] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.073 [2024-11-05 03:27:44.689406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.073 [2024-11-05 03:27:44.689539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:31.073 [2024-11-05 03:27:44.692092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.073 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.332 03:27:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.332 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.332 "name": "Existed_Raid", 00:16:31.332 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:31.332 "strip_size_kb": 64, 00:16:31.332 "state": "configuring", 00:16:31.332 "raid_level": "raid5f", 00:16:31.332 "superblock": true, 00:16:31.332 "num_base_bdevs": 3, 00:16:31.332 "num_base_bdevs_discovered": 2, 00:16:31.332 "num_base_bdevs_operational": 3, 00:16:31.332 "base_bdevs_list": [ 00:16:31.332 { 00:16:31.332 "name": "BaseBdev1", 00:16:31.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.332 "is_configured": false, 00:16:31.332 "data_offset": 0, 00:16:31.332 "data_size": 0 00:16:31.332 }, 00:16:31.332 { 00:16:31.332 "name": "BaseBdev2", 00:16:31.332 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:31.332 "is_configured": true, 00:16:31.332 "data_offset": 2048, 00:16:31.332 "data_size": 63488 00:16:31.332 }, 00:16:31.332 { 00:16:31.332 "name": "BaseBdev3", 00:16:31.332 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:31.332 "is_configured": true, 00:16:31.332 "data_offset": 2048, 00:16:31.332 "data_size": 63488 00:16:31.332 } 00:16:31.332 ] 00:16:31.332 }' 00:16:31.332 03:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.332 03:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.899 [2024-11-05 03:27:45.253373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.899 
03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:31.899 "name": "Existed_Raid", 00:16:31.899 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:31.899 "strip_size_kb": 64, 00:16:31.899 "state": "configuring", 00:16:31.899 "raid_level": "raid5f", 00:16:31.899 "superblock": true, 00:16:31.899 "num_base_bdevs": 3, 00:16:31.899 "num_base_bdevs_discovered": 1, 00:16:31.899 "num_base_bdevs_operational": 3, 00:16:31.899 "base_bdevs_list": [ 00:16:31.899 { 00:16:31.899 "name": "BaseBdev1", 00:16:31.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.899 "is_configured": false, 00:16:31.899 "data_offset": 0, 00:16:31.899 "data_size": 0 00:16:31.899 }, 00:16:31.899 { 00:16:31.899 "name": null, 00:16:31.899 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:31.899 "is_configured": false, 00:16:31.899 "data_offset": 0, 00:16:31.899 "data_size": 63488 00:16:31.899 }, 00:16:31.899 { 00:16:31.899 "name": "BaseBdev3", 00:16:31.899 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:31.899 "is_configured": true, 00:16:31.899 "data_offset": 2048, 00:16:31.899 "data_size": 63488 00:16:31.899 } 00:16:31.899 ] 00:16:31.899 }' 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.899 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.467 [2024-11-05 03:27:45.914962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.467 BaseBdev1 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.467 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.468 
03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.468 [ 00:16:32.468 { 00:16:32.468 "name": "BaseBdev1", 00:16:32.468 "aliases": [ 00:16:32.468 "c86f0b64-75dd-47fc-a529-a434a1552d7e" 00:16:32.468 ], 00:16:32.468 "product_name": "Malloc disk", 00:16:32.468 "block_size": 512, 00:16:32.468 "num_blocks": 65536, 00:16:32.468 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:32.468 "assigned_rate_limits": { 00:16:32.468 "rw_ios_per_sec": 0, 00:16:32.468 "rw_mbytes_per_sec": 0, 00:16:32.468 "r_mbytes_per_sec": 0, 00:16:32.468 "w_mbytes_per_sec": 0 00:16:32.468 }, 00:16:32.468 "claimed": true, 00:16:32.468 "claim_type": "exclusive_write", 00:16:32.468 "zoned": false, 00:16:32.468 "supported_io_types": { 00:16:32.468 "read": true, 00:16:32.468 "write": true, 00:16:32.468 "unmap": true, 00:16:32.468 "flush": true, 00:16:32.468 "reset": true, 00:16:32.468 "nvme_admin": false, 00:16:32.468 "nvme_io": false, 00:16:32.468 "nvme_io_md": false, 00:16:32.468 "write_zeroes": true, 00:16:32.468 "zcopy": true, 00:16:32.468 "get_zone_info": false, 00:16:32.468 "zone_management": false, 00:16:32.468 "zone_append": false, 00:16:32.468 "compare": false, 00:16:32.468 "compare_and_write": false, 00:16:32.468 "abort": true, 00:16:32.468 "seek_hole": false, 00:16:32.468 "seek_data": false, 00:16:32.468 "copy": true, 00:16:32.468 "nvme_iov_md": false 00:16:32.468 }, 00:16:32.468 "memory_domains": [ 00:16:32.468 { 00:16:32.468 "dma_device_id": "system", 00:16:32.468 "dma_device_type": 1 00:16:32.468 }, 00:16:32.468 { 00:16:32.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.468 "dma_device_type": 2 00:16:32.468 } 00:16:32.468 ], 00:16:32.468 "driver_specific": {} 00:16:32.468 } 00:16:32.468 ] 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.468 
03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.468 03:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.468 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:32.468 "name": "Existed_Raid", 00:16:32.468 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:32.468 "strip_size_kb": 64, 00:16:32.468 "state": "configuring", 00:16:32.468 "raid_level": "raid5f", 00:16:32.468 "superblock": true, 00:16:32.468 "num_base_bdevs": 3, 00:16:32.468 "num_base_bdevs_discovered": 2, 00:16:32.468 "num_base_bdevs_operational": 3, 00:16:32.468 "base_bdevs_list": [ 00:16:32.468 { 00:16:32.468 "name": "BaseBdev1", 00:16:32.468 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:32.468 "is_configured": true, 00:16:32.468 "data_offset": 2048, 00:16:32.468 "data_size": 63488 00:16:32.468 }, 00:16:32.468 { 00:16:32.468 "name": null, 00:16:32.468 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:32.468 "is_configured": false, 00:16:32.468 "data_offset": 0, 00:16:32.468 "data_size": 63488 00:16:32.468 }, 00:16:32.468 { 00:16:32.468 "name": "BaseBdev3", 00:16:32.468 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:32.468 "is_configured": true, 00:16:32.468 "data_offset": 2048, 00:16:32.468 "data_size": 63488 00:16:32.468 } 00:16:32.468 ] 00:16:32.468 }' 00:16:32.468 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.468 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.035 [2024-11-05 03:27:46.563297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.035 03:27:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.035 "name": "Existed_Raid", 00:16:33.035 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:33.035 "strip_size_kb": 64, 00:16:33.035 "state": "configuring", 00:16:33.035 "raid_level": "raid5f", 00:16:33.035 "superblock": true, 00:16:33.035 "num_base_bdevs": 3, 00:16:33.035 "num_base_bdevs_discovered": 1, 00:16:33.035 "num_base_bdevs_operational": 3, 00:16:33.035 "base_bdevs_list": [ 00:16:33.035 { 00:16:33.035 "name": "BaseBdev1", 00:16:33.035 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:33.035 "is_configured": true, 00:16:33.035 "data_offset": 2048, 00:16:33.035 "data_size": 63488 00:16:33.035 }, 00:16:33.035 { 00:16:33.035 "name": null, 00:16:33.035 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:33.035 "is_configured": false, 00:16:33.035 "data_offset": 0, 00:16:33.035 "data_size": 63488 00:16:33.035 }, 00:16:33.035 { 00:16:33.035 "name": null, 00:16:33.035 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:33.035 "is_configured": false, 00:16:33.035 "data_offset": 0, 00:16:33.035 "data_size": 63488 00:16:33.035 } 00:16:33.035 ] 00:16:33.035 }' 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.035 03:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.603 [2024-11-05 03:27:47.195689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.603 
03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.603 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.862 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.862 "name": "Existed_Raid", 00:16:33.862 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:33.862 "strip_size_kb": 64, 00:16:33.862 "state": "configuring", 00:16:33.862 "raid_level": "raid5f", 00:16:33.862 "superblock": true, 00:16:33.862 "num_base_bdevs": 3, 00:16:33.863 "num_base_bdevs_discovered": 2, 00:16:33.863 "num_base_bdevs_operational": 3, 00:16:33.863 "base_bdevs_list": [ 00:16:33.863 { 00:16:33.863 "name": "BaseBdev1", 00:16:33.863 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:33.863 "is_configured": true, 00:16:33.863 "data_offset": 2048, 00:16:33.863 "data_size": 63488 00:16:33.863 }, 00:16:33.863 { 00:16:33.863 "name": null, 00:16:33.863 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:33.863 "is_configured": false, 00:16:33.863 "data_offset": 0, 00:16:33.863 "data_size": 63488 00:16:33.863 }, 
00:16:33.863 { 00:16:33.863 "name": "BaseBdev3", 00:16:33.863 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:33.863 "is_configured": true, 00:16:33.863 "data_offset": 2048, 00:16:33.863 "data_size": 63488 00:16:33.863 } 00:16:33.863 ] 00:16:33.863 }' 00:16:33.863 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.863 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.122 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.122 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.122 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.122 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.122 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.383 [2024-11-05 03:27:47.795882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.383 "name": "Existed_Raid", 00:16:34.383 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:34.383 "strip_size_kb": 64, 00:16:34.383 "state": "configuring", 00:16:34.383 "raid_level": "raid5f", 00:16:34.383 "superblock": true, 00:16:34.383 "num_base_bdevs": 3, 00:16:34.383 "num_base_bdevs_discovered": 1, 00:16:34.383 
"num_base_bdevs_operational": 3, 00:16:34.383 "base_bdevs_list": [ 00:16:34.383 { 00:16:34.383 "name": null, 00:16:34.383 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:34.383 "is_configured": false, 00:16:34.383 "data_offset": 0, 00:16:34.383 "data_size": 63488 00:16:34.383 }, 00:16:34.383 { 00:16:34.383 "name": null, 00:16:34.383 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:34.383 "is_configured": false, 00:16:34.383 "data_offset": 0, 00:16:34.383 "data_size": 63488 00:16:34.383 }, 00:16:34.383 { 00:16:34.383 "name": "BaseBdev3", 00:16:34.383 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:34.383 "is_configured": true, 00:16:34.383 "data_offset": 2048, 00:16:34.383 "data_size": 63488 00:16:34.383 } 00:16:34.383 ] 00:16:34.383 }' 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.383 03:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:34.950 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.951 03:27:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 [2024-11-05 03:27:48.482647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.951 "name": "Existed_Raid", 00:16:34.951 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:34.951 "strip_size_kb": 64, 00:16:34.951 "state": "configuring", 00:16:34.951 "raid_level": "raid5f", 00:16:34.951 "superblock": true, 00:16:34.951 "num_base_bdevs": 3, 00:16:34.951 "num_base_bdevs_discovered": 2, 00:16:34.951 "num_base_bdevs_operational": 3, 00:16:34.951 "base_bdevs_list": [ 00:16:34.951 { 00:16:34.951 "name": null, 00:16:34.951 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:34.951 "is_configured": false, 00:16:34.951 "data_offset": 0, 00:16:34.951 "data_size": 63488 00:16:34.951 }, 00:16:34.951 { 00:16:34.951 "name": "BaseBdev2", 00:16:34.951 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:34.951 "is_configured": true, 00:16:34.951 "data_offset": 2048, 00:16:34.951 "data_size": 63488 00:16:34.951 }, 00:16:34.951 { 00:16:34.951 "name": "BaseBdev3", 00:16:34.951 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:34.951 "is_configured": true, 00:16:34.951 "data_offset": 2048, 00:16:34.951 "data_size": 63488 00:16:34.951 } 00:16:34.951 ] 00:16:34.951 }' 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.951 03:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c86f0b64-75dd-47fc-a529-a434a1552d7e 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.518 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.777 [2024-11-05 03:27:49.175516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:35.777 [2024-11-05 03:27:49.175768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:35.777 [2024-11-05 03:27:49.175791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:35.777 NewBaseBdev 00:16:35.777 [2024-11-05 03:27:49.176101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.777 [2024-11-05 03:27:49.181201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:35.777 [2024-11-05 03:27:49.181371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:35.777 [2024-11-05 03:27:49.181868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.777 [ 00:16:35.777 { 00:16:35.777 "name": "NewBaseBdev", 00:16:35.777 "aliases": [ 00:16:35.777 "c86f0b64-75dd-47fc-a529-a434a1552d7e" 00:16:35.777 
], 00:16:35.777 "product_name": "Malloc disk", 00:16:35.777 "block_size": 512, 00:16:35.777 "num_blocks": 65536, 00:16:35.777 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:35.777 "assigned_rate_limits": { 00:16:35.777 "rw_ios_per_sec": 0, 00:16:35.777 "rw_mbytes_per_sec": 0, 00:16:35.777 "r_mbytes_per_sec": 0, 00:16:35.777 "w_mbytes_per_sec": 0 00:16:35.777 }, 00:16:35.777 "claimed": true, 00:16:35.777 "claim_type": "exclusive_write", 00:16:35.777 "zoned": false, 00:16:35.777 "supported_io_types": { 00:16:35.777 "read": true, 00:16:35.777 "write": true, 00:16:35.777 "unmap": true, 00:16:35.777 "flush": true, 00:16:35.777 "reset": true, 00:16:35.777 "nvme_admin": false, 00:16:35.777 "nvme_io": false, 00:16:35.777 "nvme_io_md": false, 00:16:35.777 "write_zeroes": true, 00:16:35.777 "zcopy": true, 00:16:35.777 "get_zone_info": false, 00:16:35.777 "zone_management": false, 00:16:35.777 "zone_append": false, 00:16:35.777 "compare": false, 00:16:35.777 "compare_and_write": false, 00:16:35.777 "abort": true, 00:16:35.777 "seek_hole": false, 00:16:35.777 "seek_data": false, 00:16:35.777 "copy": true, 00:16:35.777 "nvme_iov_md": false 00:16:35.777 }, 00:16:35.777 "memory_domains": [ 00:16:35.777 { 00:16:35.777 "dma_device_id": "system", 00:16:35.777 "dma_device_type": 1 00:16:35.777 }, 00:16:35.777 { 00:16:35.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.777 "dma_device_type": 2 00:16:35.777 } 00:16:35.777 ], 00:16:35.777 "driver_specific": {} 00:16:35.777 } 00:16:35.777 ] 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.777 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.777 "name": "Existed_Raid", 00:16:35.777 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:35.777 "strip_size_kb": 64, 00:16:35.777 "state": "online", 00:16:35.777 "raid_level": "raid5f", 00:16:35.777 "superblock": true, 00:16:35.777 "num_base_bdevs": 3, 00:16:35.777 "num_base_bdevs_discovered": 3, 00:16:35.777 
"num_base_bdevs_operational": 3, 00:16:35.777 "base_bdevs_list": [ 00:16:35.777 { 00:16:35.777 "name": "NewBaseBdev", 00:16:35.777 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:35.778 "is_configured": true, 00:16:35.778 "data_offset": 2048, 00:16:35.778 "data_size": 63488 00:16:35.778 }, 00:16:35.778 { 00:16:35.778 "name": "BaseBdev2", 00:16:35.778 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:35.778 "is_configured": true, 00:16:35.778 "data_offset": 2048, 00:16:35.778 "data_size": 63488 00:16:35.778 }, 00:16:35.778 { 00:16:35.778 "name": "BaseBdev3", 00:16:35.778 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:35.778 "is_configured": true, 00:16:35.778 "data_offset": 2048, 00:16:35.778 "data_size": 63488 00:16:35.778 } 00:16:35.778 ] 00:16:35.778 }' 00:16:35.778 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.778 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.345 03:27:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.345 [2024-11-05 03:27:49.767971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.345 "name": "Existed_Raid", 00:16:36.345 "aliases": [ 00:16:36.345 "7f7b691d-634a-401d-8bb0-f54ffb1e1947" 00:16:36.345 ], 00:16:36.345 "product_name": "Raid Volume", 00:16:36.345 "block_size": 512, 00:16:36.345 "num_blocks": 126976, 00:16:36.345 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:36.345 "assigned_rate_limits": { 00:16:36.345 "rw_ios_per_sec": 0, 00:16:36.345 "rw_mbytes_per_sec": 0, 00:16:36.345 "r_mbytes_per_sec": 0, 00:16:36.345 "w_mbytes_per_sec": 0 00:16:36.345 }, 00:16:36.345 "claimed": false, 00:16:36.345 "zoned": false, 00:16:36.345 "supported_io_types": { 00:16:36.345 "read": true, 00:16:36.345 "write": true, 00:16:36.345 "unmap": false, 00:16:36.345 "flush": false, 00:16:36.345 "reset": true, 00:16:36.345 "nvme_admin": false, 00:16:36.345 "nvme_io": false, 00:16:36.345 "nvme_io_md": false, 00:16:36.345 "write_zeroes": true, 00:16:36.345 "zcopy": false, 00:16:36.345 "get_zone_info": false, 00:16:36.345 "zone_management": false, 00:16:36.345 "zone_append": false, 00:16:36.345 "compare": false, 00:16:36.345 "compare_and_write": false, 00:16:36.345 "abort": false, 00:16:36.345 "seek_hole": false, 00:16:36.345 "seek_data": false, 00:16:36.345 "copy": false, 00:16:36.345 "nvme_iov_md": false 00:16:36.345 }, 00:16:36.345 "driver_specific": { 00:16:36.345 "raid": { 00:16:36.345 "uuid": "7f7b691d-634a-401d-8bb0-f54ffb1e1947", 00:16:36.345 "strip_size_kb": 64, 00:16:36.345 "state": "online", 00:16:36.345 
"raid_level": "raid5f", 00:16:36.345 "superblock": true, 00:16:36.345 "num_base_bdevs": 3, 00:16:36.345 "num_base_bdevs_discovered": 3, 00:16:36.345 "num_base_bdevs_operational": 3, 00:16:36.345 "base_bdevs_list": [ 00:16:36.345 { 00:16:36.345 "name": "NewBaseBdev", 00:16:36.345 "uuid": "c86f0b64-75dd-47fc-a529-a434a1552d7e", 00:16:36.345 "is_configured": true, 00:16:36.345 "data_offset": 2048, 00:16:36.345 "data_size": 63488 00:16:36.345 }, 00:16:36.345 { 00:16:36.345 "name": "BaseBdev2", 00:16:36.345 "uuid": "a7841b69-5219-4ef7-b72f-8427353ea3ec", 00:16:36.345 "is_configured": true, 00:16:36.345 "data_offset": 2048, 00:16:36.345 "data_size": 63488 00:16:36.345 }, 00:16:36.345 { 00:16:36.345 "name": "BaseBdev3", 00:16:36.345 "uuid": "272c51ff-6681-4d53-9ea5-753415910211", 00:16:36.345 "is_configured": true, 00:16:36.345 "data_offset": 2048, 00:16:36.345 "data_size": 63488 00:16:36.345 } 00:16:36.345 ] 00:16:36.345 } 00:16:36.345 } 00:16:36.345 }' 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:36.345 BaseBdev2 00:16:36.345 BaseBdev3' 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.345 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.604 03:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:36.604 03:27:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.604 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.605 [2024-11-05 03:27:50.095820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.605 [2024-11-05 03:27:50.095857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.605 [2024-11-05 03:27:50.095960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.605 [2024-11-05 03:27:50.096417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.605 [2024-11-05 03:27:50.096442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80601 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80601 ']' 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 80601 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80601 00:16:36.605 killing process with pid 80601 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80601' 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80601 00:16:36.605 [2024-11-05 03:27:50.135741] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.605 03:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80601 00:16:36.864 [2024-11-05 03:27:50.417494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.241 ************************************ 00:16:38.241 END TEST raid5f_state_function_test_sb 00:16:38.241 ************************************ 00:16:38.241 03:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:38.241 00:16:38.241 real 0m12.286s 00:16:38.241 user 0m20.382s 00:16:38.241 sys 0m1.770s 00:16:38.241 03:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:38.241 03:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.241 03:27:51 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:38.241 03:27:51 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:38.241 03:27:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:38.241 03:27:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.241 ************************************ 00:16:38.241 START TEST raid5f_superblock_test 00:16:38.241 ************************************ 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:38.241 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81234 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81234 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81234 ']' 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:38.242 03:27:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.242 [2024-11-05 03:27:51.662509] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:16:38.242 [2024-11-05 03:27:51.662988] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81234 ] 00:16:38.242 [2024-11-05 03:27:51.847550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.500 [2024-11-05 03:27:51.981020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.759 [2024-11-05 03:27:52.188869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.759 [2024-11-05 03:27:52.188916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.327 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 malloc1 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 [2024-11-05 03:27:52.748266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.328 [2024-11-05 03:27:52.748525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.328 [2024-11-05 03:27:52.748572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:39.328 [2024-11-05 03:27:52.748589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.328 [2024-11-05 03:27:52.751374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.328 [2024-11-05 03:27:52.751423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.328 pt1 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 malloc2 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 [2024-11-05 03:27:52.799947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:39.328 [2024-11-05 03:27:52.800152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.328 [2024-11-05 03:27:52.800194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:39.328 [2024-11-05 03:27:52.800210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.328 [2024-11-05 03:27:52.803022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.328 [2024-11-05 03:27:52.803068] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:39.328 pt2 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 malloc3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 [2024-11-05 03:27:52.863376] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:39.328 [2024-11-05 03:27:52.863447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.328 [2024-11-05 03:27:52.863480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:39.328 [2024-11-05 03:27:52.863495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.328 [2024-11-05 03:27:52.866290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.328 [2024-11-05 03:27:52.866365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:39.328 pt3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.328 [2024-11-05 03:27:52.871443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.328 [2024-11-05 03:27:52.873791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:39.328 [2024-11-05 03:27:52.873879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:39.328 [2024-11-05 03:27:52.874102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:39.328 [2024-11-05 03:27:52.874133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:39.328 [2024-11-05 03:27:52.874505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:39.328 [2024-11-05 03:27:52.879838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:39.328 [2024-11-05 03:27:52.879866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:39.328 [2024-11-05 03:27:52.880121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.328 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.329 "name": "raid_bdev1", 00:16:39.329 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:39.329 "strip_size_kb": 64, 00:16:39.329 "state": "online", 00:16:39.329 "raid_level": "raid5f", 00:16:39.329 "superblock": true, 00:16:39.329 "num_base_bdevs": 3, 00:16:39.329 "num_base_bdevs_discovered": 3, 00:16:39.329 "num_base_bdevs_operational": 3, 00:16:39.329 "base_bdevs_list": [ 00:16:39.329 { 00:16:39.329 "name": "pt1", 00:16:39.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.329 "is_configured": true, 00:16:39.329 "data_offset": 2048, 00:16:39.329 "data_size": 63488 00:16:39.329 }, 00:16:39.329 { 00:16:39.329 "name": "pt2", 00:16:39.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.329 "is_configured": true, 00:16:39.329 "data_offset": 2048, 00:16:39.329 "data_size": 63488 00:16:39.329 }, 00:16:39.329 { 00:16:39.329 "name": "pt3", 00:16:39.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.329 "is_configured": true, 00:16:39.329 "data_offset": 2048, 00:16:39.329 "data_size": 63488 00:16:39.329 } 00:16:39.329 ] 00:16:39.329 }' 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.329 03:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:39.896 03:27:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.896 [2024-11-05 03:27:53.410325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.896 "name": "raid_bdev1", 00:16:39.896 "aliases": [ 00:16:39.896 "eaedaae1-176d-43e0-9ff3-d44174100a77" 00:16:39.896 ], 00:16:39.896 "product_name": "Raid Volume", 00:16:39.896 "block_size": 512, 00:16:39.896 "num_blocks": 126976, 00:16:39.896 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:39.896 "assigned_rate_limits": { 00:16:39.896 "rw_ios_per_sec": 0, 00:16:39.896 "rw_mbytes_per_sec": 0, 00:16:39.896 "r_mbytes_per_sec": 0, 00:16:39.896 "w_mbytes_per_sec": 0 00:16:39.896 }, 00:16:39.896 "claimed": false, 00:16:39.896 "zoned": false, 00:16:39.896 "supported_io_types": { 00:16:39.896 "read": true, 00:16:39.896 "write": true, 00:16:39.896 "unmap": false, 00:16:39.896 "flush": false, 00:16:39.896 "reset": true, 00:16:39.896 "nvme_admin": false, 00:16:39.896 "nvme_io": false, 00:16:39.896 "nvme_io_md": false, 
00:16:39.896 "write_zeroes": true, 00:16:39.896 "zcopy": false, 00:16:39.896 "get_zone_info": false, 00:16:39.896 "zone_management": false, 00:16:39.896 "zone_append": false, 00:16:39.896 "compare": false, 00:16:39.896 "compare_and_write": false, 00:16:39.896 "abort": false, 00:16:39.896 "seek_hole": false, 00:16:39.896 "seek_data": false, 00:16:39.896 "copy": false, 00:16:39.896 "nvme_iov_md": false 00:16:39.896 }, 00:16:39.896 "driver_specific": { 00:16:39.896 "raid": { 00:16:39.896 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:39.896 "strip_size_kb": 64, 00:16:39.896 "state": "online", 00:16:39.896 "raid_level": "raid5f", 00:16:39.896 "superblock": true, 00:16:39.896 "num_base_bdevs": 3, 00:16:39.896 "num_base_bdevs_discovered": 3, 00:16:39.896 "num_base_bdevs_operational": 3, 00:16:39.896 "base_bdevs_list": [ 00:16:39.896 { 00:16:39.896 "name": "pt1", 00:16:39.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.896 "is_configured": true, 00:16:39.896 "data_offset": 2048, 00:16:39.896 "data_size": 63488 00:16:39.896 }, 00:16:39.896 { 00:16:39.896 "name": "pt2", 00:16:39.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.896 "is_configured": true, 00:16:39.896 "data_offset": 2048, 00:16:39.896 "data_size": 63488 00:16:39.896 }, 00:16:39.896 { 00:16:39.896 "name": "pt3", 00:16:39.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.896 "is_configured": true, 00:16:39.896 "data_offset": 2048, 00:16:39.896 "data_size": 63488 00:16:39.896 } 00:16:39.896 ] 00:16:39.896 } 00:16:39.896 } 00:16:39.896 }' 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.896 pt2 00:16:39.896 pt3' 00:16:39.896 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.155 
03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.155 [2024-11-05 03:27:53.710376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eaedaae1-176d-43e0-9ff3-d44174100a77 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eaedaae1-176d-43e0-9ff3-d44174100a77 ']' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.155 03:27:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.155 [2024-11-05 03:27:53.762136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.155 [2024-11-05 03:27:53.762310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.155 [2024-11-05 03:27:53.762429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.155 [2024-11-05 03:27:53.762527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.155 [2024-11-05 03:27:53.762544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:40.155 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.414 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:40.414 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:40.414 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 [2024-11-05 03:27:53.906245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:40.415 [2024-11-05 03:27:53.908722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:40.415 [2024-11-05 03:27:53.908792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:40.415 [2024-11-05 03:27:53.908867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:40.415 [2024-11-05 03:27:53.908941] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:40.415 [2024-11-05 03:27:53.908977] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:40.415 [2024-11-05 03:27:53.909006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.415 [2024-11-05 03:27:53.909021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:40.415 request: 00:16:40.415 { 00:16:40.415 "name": "raid_bdev1", 00:16:40.415 "raid_level": "raid5f", 00:16:40.415 "base_bdevs": [ 00:16:40.415 "malloc1", 00:16:40.415 "malloc2", 00:16:40.415 "malloc3" 00:16:40.415 ], 00:16:40.415 "strip_size_kb": 64, 00:16:40.415 "superblock": false, 00:16:40.415 "method": "bdev_raid_create", 00:16:40.415 "req_id": 1 00:16:40.415 } 00:16:40.415 Got JSON-RPC error response 00:16:40.415 response: 00:16:40.415 { 00:16:40.415 "code": -17, 00:16:40.415 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:40.415 } 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 
03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 [2024-11-05 03:27:53.974204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:40.415 [2024-11-05 03:27:53.974283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.415 [2024-11-05 03:27:53.974325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:40.415 [2024-11-05 03:27:53.974342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.415 [2024-11-05 03:27:53.977136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.415 [2024-11-05 03:27:53.977319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:40.415 [2024-11-05 03:27:53.977440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:40.415 [2024-11-05 03:27:53.977509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:40.415 pt1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.415 03:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.415 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.415 "name": "raid_bdev1", 00:16:40.415 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:40.415 "strip_size_kb": 64, 00:16:40.415 "state": "configuring", 00:16:40.415 "raid_level": "raid5f", 00:16:40.415 "superblock": true, 00:16:40.415 "num_base_bdevs": 3, 00:16:40.415 "num_base_bdevs_discovered": 1, 00:16:40.415 
"num_base_bdevs_operational": 3, 00:16:40.415 "base_bdevs_list": [ 00:16:40.415 { 00:16:40.415 "name": "pt1", 00:16:40.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.415 "is_configured": true, 00:16:40.415 "data_offset": 2048, 00:16:40.415 "data_size": 63488 00:16:40.415 }, 00:16:40.415 { 00:16:40.415 "name": null, 00:16:40.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.415 "is_configured": false, 00:16:40.415 "data_offset": 2048, 00:16:40.415 "data_size": 63488 00:16:40.415 }, 00:16:40.415 { 00:16:40.415 "name": null, 00:16:40.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.415 "is_configured": false, 00:16:40.415 "data_offset": 2048, 00:16:40.415 "data_size": 63488 00:16:40.415 } 00:16:40.415 ] 00:16:40.415 }' 00:16:40.415 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.415 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 [2024-11-05 03:27:54.518360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.984 [2024-11-05 03:27:54.518448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.984 [2024-11-05 03:27:54.518489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:40.984 [2024-11-05 03:27:54.518508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.984 [2024-11-05 03:27:54.519171] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.984 [2024-11-05 03:27:54.519221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.984 [2024-11-05 03:27:54.519376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.984 [2024-11-05 03:27:54.519418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.984 pt2 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 [2024-11-05 03:27:54.526347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.984 "name": "raid_bdev1", 00:16:40.984 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:40.984 "strip_size_kb": 64, 00:16:40.984 "state": "configuring", 00:16:40.984 "raid_level": "raid5f", 00:16:40.984 "superblock": true, 00:16:40.984 "num_base_bdevs": 3, 00:16:40.984 "num_base_bdevs_discovered": 1, 00:16:40.984 "num_base_bdevs_operational": 3, 00:16:40.984 "base_bdevs_list": [ 00:16:40.984 { 00:16:40.984 "name": "pt1", 00:16:40.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.984 "is_configured": true, 00:16:40.984 "data_offset": 2048, 00:16:40.984 "data_size": 63488 00:16:40.984 }, 00:16:40.984 { 00:16:40.984 "name": null, 00:16:40.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.984 "is_configured": false, 00:16:40.984 "data_offset": 0, 00:16:40.984 "data_size": 63488 00:16:40.984 }, 00:16:40.984 { 00:16:40.984 "name": null, 00:16:40.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.984 "is_configured": false, 00:16:40.984 "data_offset": 2048, 00:16:40.984 "data_size": 63488 00:16:40.984 } 00:16:40.984 ] 00:16:40.984 }' 00:16:40.984 03:27:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.984 03:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.552 [2024-11-05 03:27:55.014433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.552 [2024-11-05 03:27:55.014658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.552 [2024-11-05 03:27:55.014696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:41.552 [2024-11-05 03:27:55.014715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.552 [2024-11-05 03:27:55.015273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.552 [2024-11-05 03:27:55.015326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.552 [2024-11-05 03:27:55.015428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.552 [2024-11-05 03:27:55.015466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.552 pt2 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:41.552 03:27:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.552 [2024-11-05 03:27:55.022430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.552 [2024-11-05 03:27:55.022491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.552 [2024-11-05 03:27:55.022514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:41.552 [2024-11-05 03:27:55.022531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.552 [2024-11-05 03:27:55.023011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.552 [2024-11-05 03:27:55.023050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.552 [2024-11-05 03:27:55.023133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:41.552 [2024-11-05 03:27:55.023168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.552 [2024-11-05 03:27:55.023348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:41.552 [2024-11-05 03:27:55.023370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:41.552 [2024-11-05 03:27:55.023666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:41.552 [2024-11-05 03:27:55.028508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:41.552 [2024-11-05 03:27:55.028533] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:41.552 [2024-11-05 03:27:55.028783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.552 pt3 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.552 "name": "raid_bdev1", 00:16:41.552 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:41.552 "strip_size_kb": 64, 00:16:41.552 "state": "online", 00:16:41.552 "raid_level": "raid5f", 00:16:41.552 "superblock": true, 00:16:41.552 "num_base_bdevs": 3, 00:16:41.552 "num_base_bdevs_discovered": 3, 00:16:41.552 "num_base_bdevs_operational": 3, 00:16:41.552 "base_bdevs_list": [ 00:16:41.552 { 00:16:41.552 "name": "pt1", 00:16:41.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.552 "is_configured": true, 00:16:41.552 "data_offset": 2048, 00:16:41.552 "data_size": 63488 00:16:41.552 }, 00:16:41.552 { 00:16:41.552 "name": "pt2", 00:16:41.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.552 "is_configured": true, 00:16:41.552 "data_offset": 2048, 00:16:41.552 "data_size": 63488 00:16:41.552 }, 00:16:41.552 { 00:16:41.552 "name": "pt3", 00:16:41.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.552 "is_configured": true, 00:16:41.552 "data_offset": 2048, 00:16:41.552 "data_size": 63488 00:16:41.552 } 00:16:41.552 ] 00:16:41.552 }' 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.552 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.120 
03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.120 [2024-11-05 03:27:55.550696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.120 "name": "raid_bdev1", 00:16:42.120 "aliases": [ 00:16:42.120 "eaedaae1-176d-43e0-9ff3-d44174100a77" 00:16:42.120 ], 00:16:42.120 "product_name": "Raid Volume", 00:16:42.120 "block_size": 512, 00:16:42.120 "num_blocks": 126976, 00:16:42.120 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:42.120 "assigned_rate_limits": { 00:16:42.120 "rw_ios_per_sec": 0, 00:16:42.120 "rw_mbytes_per_sec": 0, 00:16:42.120 "r_mbytes_per_sec": 0, 00:16:42.120 "w_mbytes_per_sec": 0 00:16:42.120 }, 00:16:42.120 "claimed": false, 00:16:42.120 "zoned": false, 00:16:42.120 "supported_io_types": { 00:16:42.120 "read": true, 00:16:42.120 "write": true, 00:16:42.120 "unmap": false, 00:16:42.120 "flush": false, 00:16:42.120 "reset": true, 00:16:42.120 "nvme_admin": false, 00:16:42.120 "nvme_io": false, 00:16:42.120 "nvme_io_md": false, 00:16:42.120 "write_zeroes": true, 00:16:42.120 "zcopy": false, 00:16:42.120 "get_zone_info": false, 
00:16:42.120 "zone_management": false, 00:16:42.120 "zone_append": false, 00:16:42.120 "compare": false, 00:16:42.120 "compare_and_write": false, 00:16:42.120 "abort": false, 00:16:42.120 "seek_hole": false, 00:16:42.120 "seek_data": false, 00:16:42.120 "copy": false, 00:16:42.120 "nvme_iov_md": false 00:16:42.120 }, 00:16:42.120 "driver_specific": { 00:16:42.120 "raid": { 00:16:42.120 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:42.120 "strip_size_kb": 64, 00:16:42.120 "state": "online", 00:16:42.120 "raid_level": "raid5f", 00:16:42.120 "superblock": true, 00:16:42.120 "num_base_bdevs": 3, 00:16:42.120 "num_base_bdevs_discovered": 3, 00:16:42.120 "num_base_bdevs_operational": 3, 00:16:42.120 "base_bdevs_list": [ 00:16:42.120 { 00:16:42.120 "name": "pt1", 00:16:42.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.120 "is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 }, 00:16:42.120 { 00:16:42.120 "name": "pt2", 00:16:42.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.120 "is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 }, 00:16:42.120 { 00:16:42.120 "name": "pt3", 00:16:42.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.120 "is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 } 00:16:42.120 ] 00:16:42.120 } 00:16:42.120 } 00:16:42.120 }' 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:42.120 pt2 00:16:42.120 pt3' 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.120 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:42.121 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.121 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.121 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.121 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.380 [2024-11-05 03:27:55.902770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eaedaae1-176d-43e0-9ff3-d44174100a77 '!=' eaedaae1-176d-43e0-9ff3-d44174100a77 ']' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:42.380 03:27:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.380 [2024-11-05 03:27:55.946678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.380 03:27:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.380 03:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.380 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.380 "name": "raid_bdev1", 00:16:42.380 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:42.380 "strip_size_kb": 64, 00:16:42.380 "state": "online", 00:16:42.380 "raid_level": "raid5f", 00:16:42.380 "superblock": true, 00:16:42.380 "num_base_bdevs": 3, 00:16:42.380 "num_base_bdevs_discovered": 2, 00:16:42.380 "num_base_bdevs_operational": 2, 00:16:42.380 "base_bdevs_list": [ 00:16:42.380 { 00:16:42.380 "name": null, 00:16:42.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.380 "is_configured": false, 00:16:42.380 "data_offset": 0, 00:16:42.380 "data_size": 63488 00:16:42.380 }, 00:16:42.380 { 00:16:42.380 "name": "pt2", 00:16:42.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.380 "is_configured": true, 00:16:42.380 "data_offset": 2048, 00:16:42.380 "data_size": 63488 00:16:42.380 }, 00:16:42.380 { 00:16:42.380 "name": "pt3", 00:16:42.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.380 "is_configured": true, 00:16:42.380 "data_offset": 2048, 00:16:42.380 "data_size": 63488 00:16:42.380 } 00:16:42.380 ] 00:16:42.380 }' 00:16:42.380 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.380 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 [2024-11-05 03:27:56.438726] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.949 [2024-11-05 03:27:56.438763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.949 [2024-11-05 03:27:56.438860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.949 [2024-11-05 03:27:56.438938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.949 [2024-11-05 03:27:56.438961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 [2024-11-05 03:27:56.522711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:42.949 [2024-11-05 03:27:56.522788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.949 [2024-11-05 03:27:56.522814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:42.949 [2024-11-05 03:27:56.522831] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:42.949 [2024-11-05 03:27:56.525634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.949 [2024-11-05 03:27:56.525842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:42.949 [2024-11-05 03:27:56.525957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:42.949 [2024-11-05 03:27:56.526023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.949 pt2 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.949 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.949 "name": "raid_bdev1", 00:16:42.949 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:42.949 "strip_size_kb": 64, 00:16:42.949 "state": "configuring", 00:16:42.949 "raid_level": "raid5f", 00:16:42.949 "superblock": true, 00:16:42.949 "num_base_bdevs": 3, 00:16:42.949 "num_base_bdevs_discovered": 1, 00:16:42.949 "num_base_bdevs_operational": 2, 00:16:42.949 "base_bdevs_list": [ 00:16:42.949 { 00:16:42.949 "name": null, 00:16:42.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.949 "is_configured": false, 00:16:42.949 "data_offset": 2048, 00:16:42.949 "data_size": 63488 00:16:42.949 }, 00:16:42.949 { 00:16:42.949 "name": "pt2", 00:16:42.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.949 "is_configured": true, 00:16:42.950 "data_offset": 2048, 00:16:42.950 "data_size": 63488 00:16:42.950 }, 00:16:42.950 { 00:16:42.950 "name": null, 00:16:42.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.950 "is_configured": false, 00:16:42.950 "data_offset": 2048, 00:16:42.950 "data_size": 63488 00:16:42.950 } 00:16:42.950 ] 00:16:42.950 }' 00:16:42.950 03:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.950 03:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.520 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:43.520 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:43.520 03:27:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:43.520 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.520 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.520 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.520 [2024-11-05 03:27:57.046840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.520 [2024-11-05 03:27:57.046925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.521 [2024-11-05 03:27:57.046957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:43.521 [2024-11-05 03:27:57.046976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.521 [2024-11-05 03:27:57.047591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.521 [2024-11-05 03:27:57.047625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.521 [2024-11-05 03:27:57.047722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:43.521 [2024-11-05 03:27:57.047769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.521 [2024-11-05 03:27:57.047918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:43.521 [2024-11-05 03:27:57.047939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.521 [2024-11-05 03:27:57.048245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:43.521 [2024-11-05 03:27:57.053123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:43.521 [2024-11-05 03:27:57.053286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:43.521 [2024-11-05 03:27:57.053739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.521 pt3 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.521 03:27:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.521 "name": "raid_bdev1", 00:16:43.521 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:43.521 "strip_size_kb": 64, 00:16:43.521 "state": "online", 00:16:43.521 "raid_level": "raid5f", 00:16:43.521 "superblock": true, 00:16:43.521 "num_base_bdevs": 3, 00:16:43.521 "num_base_bdevs_discovered": 2, 00:16:43.521 "num_base_bdevs_operational": 2, 00:16:43.521 "base_bdevs_list": [ 00:16:43.521 { 00:16:43.521 "name": null, 00:16:43.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.521 "is_configured": false, 00:16:43.521 "data_offset": 2048, 00:16:43.521 "data_size": 63488 00:16:43.521 }, 00:16:43.521 { 00:16:43.521 "name": "pt2", 00:16:43.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.521 "is_configured": true, 00:16:43.521 "data_offset": 2048, 00:16:43.521 "data_size": 63488 00:16:43.521 }, 00:16:43.521 { 00:16:43.521 "name": "pt3", 00:16:43.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.521 "is_configured": true, 00:16:43.521 "data_offset": 2048, 00:16:43.521 "data_size": 63488 00:16:43.521 } 00:16:43.521 ] 00:16:43.521 }' 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.521 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.089 [2024-11-05 03:27:57.623436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.089 [2024-11-05 03:27:57.623477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.089 [2024-11-05 03:27:57.623571] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.089 [2024-11-05 03:27:57.623653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.089 [2024-11-05 03:27:57.623670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.089 [2024-11-05 03:27:57.691490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:44.089 [2024-11-05 03:27:57.691566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.089 [2024-11-05 03:27:57.691595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:44.089 [2024-11-05 03:27:57.691610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.089 [2024-11-05 03:27:57.694492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.089 [2024-11-05 03:27:57.694537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:44.089 [2024-11-05 03:27:57.694641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:44.089 [2024-11-05 03:27:57.694700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.089 [2024-11-05 03:27:57.694867] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:44.089 [2024-11-05 03:27:57.694895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.089 [2024-11-05 03:27:57.694920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:44.089 [2024-11-05 03:27:57.694997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.089 pt1 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:44.089 03:27:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.089 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.348 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.348 "name": "raid_bdev1", 00:16:44.348 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:44.348 "strip_size_kb": 64, 00:16:44.348 "state": "configuring", 00:16:44.348 "raid_level": "raid5f", 00:16:44.348 
"superblock": true, 00:16:44.348 "num_base_bdevs": 3, 00:16:44.348 "num_base_bdevs_discovered": 1, 00:16:44.348 "num_base_bdevs_operational": 2, 00:16:44.348 "base_bdevs_list": [ 00:16:44.348 { 00:16:44.348 "name": null, 00:16:44.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.348 "is_configured": false, 00:16:44.348 "data_offset": 2048, 00:16:44.348 "data_size": 63488 00:16:44.348 }, 00:16:44.348 { 00:16:44.348 "name": "pt2", 00:16:44.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.348 "is_configured": true, 00:16:44.348 "data_offset": 2048, 00:16:44.348 "data_size": 63488 00:16:44.348 }, 00:16:44.348 { 00:16:44.348 "name": null, 00:16:44.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.348 "is_configured": false, 00:16:44.348 "data_offset": 2048, 00:16:44.348 "data_size": 63488 00:16:44.348 } 00:16:44.348 ] 00:16:44.348 }' 00:16:44.348 03:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.348 03:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.607 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:44.607 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:44.607 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.607 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.607 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.866 [2024-11-05 03:27:58.259654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:44.866 [2024-11-05 03:27:58.259732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.866 [2024-11-05 03:27:58.259764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:44.866 [2024-11-05 03:27:58.259779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.866 [2024-11-05 03:27:58.260371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.866 [2024-11-05 03:27:58.260406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:44.866 [2024-11-05 03:27:58.260520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:44.866 [2024-11-05 03:27:58.260553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.866 [2024-11-05 03:27:58.260706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:44.866 [2024-11-05 03:27:58.260731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:44.866 [2024-11-05 03:27:58.261034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:44.866 [2024-11-05 03:27:58.266041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:44.866 [2024-11-05 03:27:58.266078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:44.866 [2024-11-05 03:27:58.266398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.866 pt3 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.866 "name": "raid_bdev1", 00:16:44.866 "uuid": "eaedaae1-176d-43e0-9ff3-d44174100a77", 00:16:44.866 "strip_size_kb": 64, 00:16:44.866 "state": "online", 00:16:44.866 "raid_level": 
"raid5f", 00:16:44.866 "superblock": true, 00:16:44.866 "num_base_bdevs": 3, 00:16:44.866 "num_base_bdevs_discovered": 2, 00:16:44.866 "num_base_bdevs_operational": 2, 00:16:44.866 "base_bdevs_list": [ 00:16:44.866 { 00:16:44.866 "name": null, 00:16:44.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.866 "is_configured": false, 00:16:44.866 "data_offset": 2048, 00:16:44.866 "data_size": 63488 00:16:44.866 }, 00:16:44.866 { 00:16:44.866 "name": "pt2", 00:16:44.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.866 "is_configured": true, 00:16:44.866 "data_offset": 2048, 00:16:44.866 "data_size": 63488 00:16:44.866 }, 00:16:44.866 { 00:16:44.866 "name": "pt3", 00:16:44.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.866 "is_configured": true, 00:16:44.866 "data_offset": 2048, 00:16:44.866 "data_size": 63488 00:16:44.866 } 00:16:44.866 ] 00:16:44.866 }' 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.866 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:45.435 [2024-11-05 03:27:58.820392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eaedaae1-176d-43e0-9ff3-d44174100a77 '!=' eaedaae1-176d-43e0-9ff3-d44174100a77 ']' 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81234 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81234 ']' 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81234 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81234 00:16:45.435 killing process with pid 81234 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81234' 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81234 00:16:45.435 [2024-11-05 03:27:58.898678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.435 03:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 
81234 00:16:45.435 [2024-11-05 03:27:58.898795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.435 [2024-11-05 03:27:58.898885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.435 [2024-11-05 03:27:58.898905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:45.694 [2024-11-05 03:27:59.172686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.630 03:28:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:46.630 00:16:46.630 real 0m8.657s 00:16:46.630 user 0m14.157s 00:16:46.630 sys 0m1.242s 00:16:46.630 03:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:46.630 03:28:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.630 ************************************ 00:16:46.630 END TEST raid5f_superblock_test 00:16:46.630 ************************************ 00:16:46.630 03:28:00 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:46.630 03:28:00 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:46.630 03:28:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:46.630 03:28:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:46.630 03:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.630 ************************************ 00:16:46.630 START TEST raid5f_rebuild_test 00:16:46.630 ************************************ 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:46.630 03:28:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81686 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81686 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81686 ']' 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.630 03:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.889 [2024-11-05 03:28:00.385096] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:16:46.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:46.889 Zero copy mechanism will not be used. 00:16:46.889 [2024-11-05 03:28:00.385352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81686 ] 00:16:47.148 [2024-11-05 03:28:00.576997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.148 [2024-11-05 03:28:00.753355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.408 [2024-11-05 03:28:01.016779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.408 [2024-11-05 03:28:01.016847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 BaseBdev1_malloc 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 
03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 [2024-11-05 03:28:01.507804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.978 [2024-11-05 03:28:01.507883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.978 [2024-11-05 03:28:01.507925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.978 [2024-11-05 03:28:01.507945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.978 [2024-11-05 03:28:01.510651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.978 [2024-11-05 03:28:01.510703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.978 BaseBdev1 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 BaseBdev2_malloc 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 [2024-11-05 03:28:01.555627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:47.978 [2024-11-05 03:28:01.555702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.978 [2024-11-05 03:28:01.555727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.978 [2024-11-05 03:28:01.555745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.978 [2024-11-05 03:28:01.558485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.978 [2024-11-05 03:28:01.558535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.978 BaseBdev2 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.978 BaseBdev3_malloc 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.978 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.237 [2024-11-05 03:28:01.617777] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:48.237 [2024-11-05 03:28:01.617847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.237 [2024-11-05 03:28:01.617878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:48.237 [2024-11-05 03:28:01.617895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.237 [2024-11-05 03:28:01.620597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.237 [2024-11-05 03:28:01.620645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:48.237 BaseBdev3 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.237 spare_malloc 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.237 spare_delay 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.237 [2024-11-05 03:28:01.677806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.237 [2024-11-05 03:28:01.677871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.237 [2024-11-05 03:28:01.677899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:48.237 [2024-11-05 03:28:01.677915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.237 [2024-11-05 03:28:01.680701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.237 [2024-11-05 03:28:01.680751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.237 spare 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.237 [2024-11-05 03:28:01.685890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.237 [2024-11-05 03:28:01.688265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.237 [2024-11-05 03:28:01.688398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.237 [2024-11-05 03:28:01.688513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:48.237 [2024-11-05 03:28:01.688530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:48.237 [2024-11-05 
03:28:01.688846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:48.237 [2024-11-05 03:28:01.693999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:48.237 [2024-11-05 03:28:01.694036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:48.237 [2024-11-05 03:28:01.694278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.237 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.238 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.238 "name": "raid_bdev1", 00:16:48.238 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:48.238 "strip_size_kb": 64, 00:16:48.238 "state": "online", 00:16:48.238 "raid_level": "raid5f", 00:16:48.238 "superblock": false, 00:16:48.238 "num_base_bdevs": 3, 00:16:48.238 "num_base_bdevs_discovered": 3, 00:16:48.238 "num_base_bdevs_operational": 3, 00:16:48.238 "base_bdevs_list": [ 00:16:48.238 { 00:16:48.238 "name": "BaseBdev1", 00:16:48.238 "uuid": "286be4e3-fb69-5fae-a64b-4cc0cc8dc50e", 00:16:48.238 "is_configured": true, 00:16:48.238 "data_offset": 0, 00:16:48.238 "data_size": 65536 00:16:48.238 }, 00:16:48.238 { 00:16:48.238 "name": "BaseBdev2", 00:16:48.238 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:48.238 "is_configured": true, 00:16:48.238 "data_offset": 0, 00:16:48.238 "data_size": 65536 00:16:48.238 }, 00:16:48.238 { 00:16:48.238 "name": "BaseBdev3", 00:16:48.238 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:48.238 "is_configured": true, 00:16:48.238 "data_offset": 0, 00:16:48.238 "data_size": 65536 00:16:48.238 } 00:16:48.238 ] 00:16:48.238 }' 00:16:48.238 03:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.238 03:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.805 [2024-11-05 03:28:02.200235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:48.805 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:49.064 [2024-11-05 03:28:02.540182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:49.064 /dev/nbd0 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.064 1+0 records in 00:16:49.064 1+0 records out 00:16:49.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270331 s, 
15.2 MB/s 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:49.064 03:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:49.631 512+0 records in 00:16:49.631 512+0 records out 00:16:49.631 67108864 bytes (67 MB, 64 MiB) copied, 0.527259 s, 127 MB/s 00:16:49.631 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:49.632 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.632 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:49.632 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.632 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:49.632 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:16:49.632 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.890 [2024-11-05 03:28:03.471198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.890 [2024-11-05 03:28:03.485098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.890 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.149 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.149 "name": "raid_bdev1", 00:16:50.149 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:50.149 "strip_size_kb": 64, 00:16:50.149 "state": "online", 00:16:50.149 "raid_level": "raid5f", 00:16:50.149 "superblock": false, 00:16:50.149 "num_base_bdevs": 3, 00:16:50.149 "num_base_bdevs_discovered": 2, 00:16:50.149 "num_base_bdevs_operational": 2, 00:16:50.149 "base_bdevs_list": [ 00:16:50.149 { 00:16:50.149 "name": null, 00:16:50.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.149 "is_configured": false, 00:16:50.149 "data_offset": 0, 00:16:50.149 "data_size": 65536 00:16:50.149 }, 
00:16:50.149 { 00:16:50.149 "name": "BaseBdev2", 00:16:50.149 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:50.149 "is_configured": true, 00:16:50.149 "data_offset": 0, 00:16:50.149 "data_size": 65536 00:16:50.149 }, 00:16:50.149 { 00:16:50.149 "name": "BaseBdev3", 00:16:50.149 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:50.149 "is_configured": true, 00:16:50.149 "data_offset": 0, 00:16:50.149 "data_size": 65536 00:16:50.149 } 00:16:50.149 ] 00:16:50.149 }' 00:16:50.149 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.149 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.409 03:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.409 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.409 03:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.409 [2024-11-05 03:28:04.005266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.409 [2024-11-05 03:28:04.021250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:50.409 03:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.409 03:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:50.409 [2024-11-05 03:28:04.028932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.787 "name": "raid_bdev1", 00:16:51.787 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:51.787 "strip_size_kb": 64, 00:16:51.787 "state": "online", 00:16:51.787 "raid_level": "raid5f", 00:16:51.787 "superblock": false, 00:16:51.787 "num_base_bdevs": 3, 00:16:51.787 "num_base_bdevs_discovered": 3, 00:16:51.787 "num_base_bdevs_operational": 3, 00:16:51.787 "process": { 00:16:51.787 "type": "rebuild", 00:16:51.787 "target": "spare", 00:16:51.787 "progress": { 00:16:51.787 "blocks": 18432, 00:16:51.787 "percent": 14 00:16:51.787 } 00:16:51.787 }, 00:16:51.787 "base_bdevs_list": [ 00:16:51.787 { 00:16:51.787 "name": "spare", 00:16:51.787 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:51.787 "is_configured": true, 00:16:51.787 "data_offset": 0, 00:16:51.787 "data_size": 65536 00:16:51.787 }, 00:16:51.787 { 00:16:51.787 "name": "BaseBdev2", 00:16:51.787 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:51.787 "is_configured": true, 00:16:51.787 "data_offset": 0, 00:16:51.787 "data_size": 65536 00:16:51.787 }, 00:16:51.787 { 00:16:51.787 "name": "BaseBdev3", 00:16:51.787 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:51.787 "is_configured": true, 00:16:51.787 
"data_offset": 0, 00:16:51.787 "data_size": 65536 00:16:51.787 } 00:16:51.787 ] 00:16:51.787 }' 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.787 [2024-11-05 03:28:05.198096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.787 [2024-11-05 03:28:05.242085] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:51.787 [2024-11-05 03:28:05.242185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.787 [2024-11-05 03:28:05.242212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.787 [2024-11-05 03:28:05.242222] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.787 03:28:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.787 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.787 "name": "raid_bdev1", 00:16:51.787 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:51.787 "strip_size_kb": 64, 00:16:51.787 "state": "online", 00:16:51.787 "raid_level": "raid5f", 00:16:51.787 "superblock": false, 00:16:51.787 "num_base_bdevs": 3, 00:16:51.787 "num_base_bdevs_discovered": 2, 00:16:51.787 "num_base_bdevs_operational": 2, 00:16:51.787 "base_bdevs_list": [ 00:16:51.787 { 00:16:51.787 "name": null, 00:16:51.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.787 "is_configured": false, 00:16:51.787 "data_offset": 0, 00:16:51.787 "data_size": 65536 00:16:51.787 }, 00:16:51.787 { 00:16:51.787 
"name": "BaseBdev2", 00:16:51.787 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:51.788 "is_configured": true, 00:16:51.788 "data_offset": 0, 00:16:51.788 "data_size": 65536 00:16:51.788 }, 00:16:51.788 { 00:16:51.788 "name": "BaseBdev3", 00:16:51.788 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:51.788 "is_configured": true, 00:16:51.788 "data_offset": 0, 00:16:51.788 "data_size": 65536 00:16:51.788 } 00:16:51.788 ] 00:16:51.788 }' 00:16:51.788 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.788 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.355 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.356 "name": "raid_bdev1", 00:16:52.356 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:52.356 "strip_size_kb": 64, 00:16:52.356 "state": 
"online", 00:16:52.356 "raid_level": "raid5f", 00:16:52.356 "superblock": false, 00:16:52.356 "num_base_bdevs": 3, 00:16:52.356 "num_base_bdevs_discovered": 2, 00:16:52.356 "num_base_bdevs_operational": 2, 00:16:52.356 "base_bdevs_list": [ 00:16:52.356 { 00:16:52.356 "name": null, 00:16:52.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.356 "is_configured": false, 00:16:52.356 "data_offset": 0, 00:16:52.356 "data_size": 65536 00:16:52.356 }, 00:16:52.356 { 00:16:52.356 "name": "BaseBdev2", 00:16:52.356 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:52.356 "is_configured": true, 00:16:52.356 "data_offset": 0, 00:16:52.356 "data_size": 65536 00:16:52.356 }, 00:16:52.356 { 00:16:52.356 "name": "BaseBdev3", 00:16:52.356 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:52.356 "is_configured": true, 00:16:52.356 "data_offset": 0, 00:16:52.356 "data_size": 65536 00:16:52.356 } 00:16:52.356 ] 00:16:52.356 }' 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.356 [2024-11-05 03:28:05.962347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.356 [2024-11-05 03:28:05.976741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:52.356 03:28:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.356 03:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:52.356 [2024-11-05 03:28:05.983655] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:53.733 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.733 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.733 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.733 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.734 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.734 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.734 03:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.734 03:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.734 03:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.734 "name": "raid_bdev1", 00:16:53.734 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:53.734 "strip_size_kb": 64, 00:16:53.734 "state": "online", 00:16:53.734 "raid_level": "raid5f", 00:16:53.734 "superblock": false, 00:16:53.734 "num_base_bdevs": 3, 00:16:53.734 "num_base_bdevs_discovered": 3, 00:16:53.734 "num_base_bdevs_operational": 3, 00:16:53.734 "process": { 00:16:53.734 "type": "rebuild", 00:16:53.734 "target": "spare", 00:16:53.734 "progress": { 
00:16:53.734 "blocks": 18432, 00:16:53.734 "percent": 14 00:16:53.734 } 00:16:53.734 }, 00:16:53.734 "base_bdevs_list": [ 00:16:53.734 { 00:16:53.734 "name": "spare", 00:16:53.734 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:53.734 "is_configured": true, 00:16:53.734 "data_offset": 0, 00:16:53.734 "data_size": 65536 00:16:53.734 }, 00:16:53.734 { 00:16:53.734 "name": "BaseBdev2", 00:16:53.734 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:53.734 "is_configured": true, 00:16:53.734 "data_offset": 0, 00:16:53.734 "data_size": 65536 00:16:53.734 }, 00:16:53.734 { 00:16:53.734 "name": "BaseBdev3", 00:16:53.734 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:53.734 "is_configured": true, 00:16:53.734 "data_offset": 0, 00:16:53.734 "data_size": 65536 00:16:53.734 } 00:16:53.734 ] 00:16:53.734 }' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=589 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.734 "name": "raid_bdev1", 00:16:53.734 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:53.734 "strip_size_kb": 64, 00:16:53.734 "state": "online", 00:16:53.734 "raid_level": "raid5f", 00:16:53.734 "superblock": false, 00:16:53.734 "num_base_bdevs": 3, 00:16:53.734 "num_base_bdevs_discovered": 3, 00:16:53.734 "num_base_bdevs_operational": 3, 00:16:53.734 "process": { 00:16:53.734 "type": "rebuild", 00:16:53.734 "target": "spare", 00:16:53.734 "progress": { 00:16:53.734 "blocks": 22528, 00:16:53.734 "percent": 17 00:16:53.734 } 00:16:53.734 }, 00:16:53.734 "base_bdevs_list": [ 00:16:53.734 { 00:16:53.734 "name": "spare", 00:16:53.734 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:53.734 "is_configured": true, 00:16:53.734 "data_offset": 0, 00:16:53.734 "data_size": 65536 00:16:53.734 }, 00:16:53.734 { 00:16:53.734 "name": "BaseBdev2", 00:16:53.734 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:53.734 "is_configured": true, 00:16:53.734 
"data_offset": 0, 00:16:53.734 "data_size": 65536 00:16:53.734 }, 00:16:53.734 { 00:16:53.734 "name": "BaseBdev3", 00:16:53.734 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:53.734 "is_configured": true, 00:16:53.734 "data_offset": 0, 00:16:53.734 "data_size": 65536 00:16:53.734 } 00:16:53.734 ] 00:16:53.734 }' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.734 03:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.670 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.670 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.670 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.670 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.670 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.670 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.934 03:28:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.934 "name": "raid_bdev1", 00:16:54.934 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:54.934 "strip_size_kb": 64, 00:16:54.934 "state": "online", 00:16:54.934 "raid_level": "raid5f", 00:16:54.934 "superblock": false, 00:16:54.934 "num_base_bdevs": 3, 00:16:54.934 "num_base_bdevs_discovered": 3, 00:16:54.934 "num_base_bdevs_operational": 3, 00:16:54.934 "process": { 00:16:54.934 "type": "rebuild", 00:16:54.934 "target": "spare", 00:16:54.934 "progress": { 00:16:54.934 "blocks": 47104, 00:16:54.934 "percent": 35 00:16:54.934 } 00:16:54.934 }, 00:16:54.934 "base_bdevs_list": [ 00:16:54.934 { 00:16:54.934 "name": "spare", 00:16:54.934 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:54.934 "is_configured": true, 00:16:54.934 "data_offset": 0, 00:16:54.934 "data_size": 65536 00:16:54.934 }, 00:16:54.934 { 00:16:54.934 "name": "BaseBdev2", 00:16:54.934 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:54.934 "is_configured": true, 00:16:54.934 "data_offset": 0, 00:16:54.934 "data_size": 65536 00:16:54.934 }, 00:16:54.934 { 00:16:54.934 "name": "BaseBdev3", 00:16:54.934 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:54.934 "is_configured": true, 00:16:54.934 "data_offset": 0, 00:16:54.934 "data_size": 65536 00:16:54.934 } 00:16:54.934 ] 00:16:54.934 }' 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.934 03:28:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.868 03:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.126 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.126 "name": "raid_bdev1", 00:16:56.126 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:56.126 "strip_size_kb": 64, 00:16:56.126 "state": "online", 00:16:56.126 "raid_level": "raid5f", 00:16:56.126 "superblock": false, 00:16:56.126 "num_base_bdevs": 3, 00:16:56.126 "num_base_bdevs_discovered": 3, 00:16:56.126 "num_base_bdevs_operational": 3, 00:16:56.126 "process": { 00:16:56.126 "type": "rebuild", 00:16:56.126 "target": "spare", 00:16:56.126 "progress": { 00:16:56.126 "blocks": 69632, 00:16:56.126 "percent": 53 00:16:56.127 } 00:16:56.127 }, 00:16:56.127 "base_bdevs_list": [ 00:16:56.127 { 00:16:56.127 "name": "spare", 00:16:56.127 
"uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:56.127 "is_configured": true, 00:16:56.127 "data_offset": 0, 00:16:56.127 "data_size": 65536 00:16:56.127 }, 00:16:56.127 { 00:16:56.127 "name": "BaseBdev2", 00:16:56.127 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:56.127 "is_configured": true, 00:16:56.127 "data_offset": 0, 00:16:56.127 "data_size": 65536 00:16:56.127 }, 00:16:56.127 { 00:16:56.127 "name": "BaseBdev3", 00:16:56.127 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:56.127 "is_configured": true, 00:16:56.127 "data_offset": 0, 00:16:56.127 "data_size": 65536 00:16:56.127 } 00:16:56.127 ] 00:16:56.127 }' 00:16:56.127 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.127 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.127 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.127 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.127 03:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.091 03:28:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.091 "name": "raid_bdev1", 00:16:57.091 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:57.091 "strip_size_kb": 64, 00:16:57.091 "state": "online", 00:16:57.091 "raid_level": "raid5f", 00:16:57.091 "superblock": false, 00:16:57.091 "num_base_bdevs": 3, 00:16:57.091 "num_base_bdevs_discovered": 3, 00:16:57.091 "num_base_bdevs_operational": 3, 00:16:57.091 "process": { 00:16:57.091 "type": "rebuild", 00:16:57.091 "target": "spare", 00:16:57.091 "progress": { 00:16:57.091 "blocks": 94208, 00:16:57.091 "percent": 71 00:16:57.091 } 00:16:57.091 }, 00:16:57.091 "base_bdevs_list": [ 00:16:57.091 { 00:16:57.091 "name": "spare", 00:16:57.091 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:57.091 "is_configured": true, 00:16:57.091 "data_offset": 0, 00:16:57.091 "data_size": 65536 00:16:57.091 }, 00:16:57.091 { 00:16:57.091 "name": "BaseBdev2", 00:16:57.091 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:57.091 "is_configured": true, 00:16:57.091 "data_offset": 0, 00:16:57.091 "data_size": 65536 00:16:57.091 }, 00:16:57.091 { 00:16:57.091 "name": "BaseBdev3", 00:16:57.091 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:57.091 "is_configured": true, 00:16:57.091 "data_offset": 0, 00:16:57.091 "data_size": 65536 00:16:57.091 } 00:16:57.091 ] 00:16:57.091 }' 00:16:57.091 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.350 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.350 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.350 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.350 03:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.287 "name": "raid_bdev1", 00:16:58.287 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:58.287 "strip_size_kb": 64, 00:16:58.287 "state": "online", 00:16:58.287 "raid_level": "raid5f", 00:16:58.287 "superblock": false, 00:16:58.287 "num_base_bdevs": 3, 00:16:58.287 "num_base_bdevs_discovered": 3, 00:16:58.287 
"num_base_bdevs_operational": 3, 00:16:58.287 "process": { 00:16:58.287 "type": "rebuild", 00:16:58.287 "target": "spare", 00:16:58.287 "progress": { 00:16:58.287 "blocks": 116736, 00:16:58.287 "percent": 89 00:16:58.287 } 00:16:58.287 }, 00:16:58.287 "base_bdevs_list": [ 00:16:58.287 { 00:16:58.287 "name": "spare", 00:16:58.287 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:58.287 "is_configured": true, 00:16:58.287 "data_offset": 0, 00:16:58.287 "data_size": 65536 00:16:58.287 }, 00:16:58.287 { 00:16:58.287 "name": "BaseBdev2", 00:16:58.287 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:58.287 "is_configured": true, 00:16:58.287 "data_offset": 0, 00:16:58.287 "data_size": 65536 00:16:58.287 }, 00:16:58.287 { 00:16:58.287 "name": "BaseBdev3", 00:16:58.287 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:58.287 "is_configured": true, 00:16:58.287 "data_offset": 0, 00:16:58.287 "data_size": 65536 00:16:58.287 } 00:16:58.287 ] 00:16:58.287 }' 00:16:58.287 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.546 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.546 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.546 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.546 03:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.113 [2024-11-05 03:28:12.457815] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:59.113 [2024-11-05 03:28:12.457952] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:59.113 [2024-11-05 03:28:12.458026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.372 03:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.632 "name": "raid_bdev1", 00:16:59.632 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:59.632 "strip_size_kb": 64, 00:16:59.632 "state": "online", 00:16:59.632 "raid_level": "raid5f", 00:16:59.632 "superblock": false, 00:16:59.632 "num_base_bdevs": 3, 00:16:59.632 "num_base_bdevs_discovered": 3, 00:16:59.632 "num_base_bdevs_operational": 3, 00:16:59.632 "base_bdevs_list": [ 00:16:59.632 { 00:16:59.632 "name": "spare", 00:16:59.632 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:59.632 "is_configured": true, 00:16:59.632 "data_offset": 0, 00:16:59.632 "data_size": 65536 00:16:59.632 }, 00:16:59.632 { 00:16:59.632 "name": "BaseBdev2", 00:16:59.632 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:59.632 "is_configured": true, 00:16:59.632 
"data_offset": 0, 00:16:59.632 "data_size": 65536 00:16:59.632 }, 00:16:59.632 { 00:16:59.632 "name": "BaseBdev3", 00:16:59.632 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:59.632 "is_configured": true, 00:16:59.632 "data_offset": 0, 00:16:59.632 "data_size": 65536 00:16:59.632 } 00:16:59.632 ] 00:16:59.632 }' 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.632 03:28:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.632 "name": "raid_bdev1", 00:16:59.632 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:59.632 "strip_size_kb": 64, 00:16:59.632 "state": "online", 00:16:59.632 "raid_level": "raid5f", 00:16:59.632 "superblock": false, 00:16:59.632 "num_base_bdevs": 3, 00:16:59.632 "num_base_bdevs_discovered": 3, 00:16:59.632 "num_base_bdevs_operational": 3, 00:16:59.632 "base_bdevs_list": [ 00:16:59.632 { 00:16:59.632 "name": "spare", 00:16:59.632 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:59.632 "is_configured": true, 00:16:59.632 "data_offset": 0, 00:16:59.632 "data_size": 65536 00:16:59.632 }, 00:16:59.632 { 00:16:59.632 "name": "BaseBdev2", 00:16:59.632 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:59.632 "is_configured": true, 00:16:59.632 "data_offset": 0, 00:16:59.632 "data_size": 65536 00:16:59.632 }, 00:16:59.632 { 00:16:59.632 "name": "BaseBdev3", 00:16:59.632 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:59.632 "is_configured": true, 00:16:59.632 "data_offset": 0, 00:16:59.632 "data_size": 65536 00:16:59.632 } 00:16:59.632 ] 00:16:59.632 }' 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.632 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.892 03:28:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.892 "name": "raid_bdev1", 00:16:59.892 "uuid": "3892cd72-6ac1-4d62-9475-e5e2c4941d48", 00:16:59.892 "strip_size_kb": 64, 00:16:59.892 "state": "online", 00:16:59.892 "raid_level": "raid5f", 00:16:59.892 "superblock": false, 00:16:59.892 "num_base_bdevs": 3, 00:16:59.892 "num_base_bdevs_discovered": 3, 00:16:59.892 "num_base_bdevs_operational": 3, 00:16:59.892 "base_bdevs_list": [ 00:16:59.892 { 00:16:59.892 "name": "spare", 00:16:59.892 "uuid": "760435d3-17e1-59e3-b953-e323e59ed84a", 00:16:59.892 "is_configured": true, 00:16:59.892 "data_offset": 0, 00:16:59.892 "data_size": 65536 00:16:59.892 }, 00:16:59.892 { 00:16:59.892 
"name": "BaseBdev2", 00:16:59.892 "uuid": "1ad3d110-fd75-5ca0-9776-479a1f15cc2c", 00:16:59.892 "is_configured": true, 00:16:59.892 "data_offset": 0, 00:16:59.892 "data_size": 65536 00:16:59.892 }, 00:16:59.892 { 00:16:59.892 "name": "BaseBdev3", 00:16:59.892 "uuid": "f9ce67e8-aaad-5aa8-a908-f071b124d89f", 00:16:59.892 "is_configured": true, 00:16:59.892 "data_offset": 0, 00:16:59.892 "data_size": 65536 00:16:59.892 } 00:16:59.892 ] 00:16:59.892 }' 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.892 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.459 [2024-11-05 03:28:13.840085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.459 [2024-11-05 03:28:13.840119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.459 [2024-11-05 03:28:13.840233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.459 [2024-11-05 03:28:13.840399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.459 [2024-11-05 03:28:13.840434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:00.459 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:00.460 03:28:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:00.718 /dev/nbd0 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.718 1+0 records in 00:17:00.718 1+0 records out 00:17:00.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338805 s, 12.1 MB/s 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:00.718 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:00.719 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:00.977 /dev/nbd1 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.977 1+0 records in 00:17:00.977 1+0 records out 00:17:00.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379942 s, 10.8 MB/s 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:00.977 03:28:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:00.977 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.248 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.544 03:28:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81686 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81686 ']' 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81686 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81686 00:17:01.803 killing process with pid 81686 00:17:01.803 Received shutdown signal, test time was about 60.000000 seconds 00:17:01.803 00:17:01.803 Latency(us) 00:17:01.803 
[2024-11-05T03:28:15.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.803 [2024-11-05T03:28:15.442Z] =================================================================================================================== 00:17:01.803 [2024-11-05T03:28:15.442Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81686' 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81686 00:17:01.803 03:28:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81686 00:17:01.803 [2024-11-05 03:28:15.304363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.370 [2024-11-05 03:28:15.761768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:03.304 00:17:03.304 real 0m16.448s 00:17:03.304 user 0m21.065s 00:17:03.304 sys 0m2.089s 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.304 ************************************ 00:17:03.304 END TEST raid5f_rebuild_test 00:17:03.304 ************************************ 00:17:03.304 03:28:16 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:03.304 03:28:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:03.304 03:28:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:03.304 03:28:16 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.304 ************************************ 00:17:03.304 START TEST raid5f_rebuild_test_sb 00:17:03.304 ************************************ 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:03.304 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82143 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82143 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82143 ']' 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:03.305 03:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.305 [2024-11-05 03:28:16.868887] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:17:03.305 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:03.305 Zero copy mechanism will not be used. 00:17:03.305 [2024-11-05 03:28:16.869714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82143 ] 00:17:03.563 [2024-11-05 03:28:17.056653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.563 [2024-11-05 03:28:17.173179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.821 [2024-11-05 03:28:17.361254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.821 [2024-11-05 03:28:17.361290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 BaseBdev1_malloc 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 [2024-11-05 03:28:17.916246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:04.389 [2024-11-05 03:28:17.916357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.389 [2024-11-05 03:28:17.916393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:04.389 [2024-11-05 03:28:17.916412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.389 [2024-11-05 03:28:17.919297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.389 [2024-11-05 03:28:17.919366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:04.389 BaseBdev1 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:04.389 03:28:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 BaseBdev2_malloc 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 [2024-11-05 03:28:17.968802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:04.389 [2024-11-05 03:28:17.968910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.389 [2024-11-05 03:28:17.968938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:04.389 [2024-11-05 03:28:17.968959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.389 [2024-11-05 03:28:17.971640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.389 [2024-11-05 03:28:17.971721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:04.389 BaseBdev2 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.389 03:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:04.389 BaseBdev3_malloc 00:17:04.389 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.389 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:04.389 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.389 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.648 [2024-11-05 03:28:18.026813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:04.648 [2024-11-05 03:28:18.026920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.648 [2024-11-05 03:28:18.026953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:04.648 [2024-11-05 03:28:18.026973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.648 [2024-11-05 03:28:18.029769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.648 [2024-11-05 03:28:18.029954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:04.648 BaseBdev3 00:17:04.648 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.648 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:04.648 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.648 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.649 spare_malloc 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.649 spare_delay 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.649 [2024-11-05 03:28:18.084943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.649 [2024-11-05 03:28:18.085027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.649 [2024-11-05 03:28:18.085064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:04.649 [2024-11-05 03:28:18.085082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.649 [2024-11-05 03:28:18.087881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.649 [2024-11-05 03:28:18.087948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.649 spare 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.649 [2024-11-05 03:28:18.093053] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.649 [2024-11-05 03:28:18.095760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.649 [2024-11-05 03:28:18.095844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:04.649 [2024-11-05 03:28:18.096067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:04.649 [2024-11-05 03:28:18.096087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:04.649 [2024-11-05 03:28:18.096582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:04.649 [2024-11-05 03:28:18.101894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:04.649 [2024-11-05 03:28:18.102082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:04.649 [2024-11-05 03:28:18.102492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.649 "name": "raid_bdev1", 00:17:04.649 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:04.649 "strip_size_kb": 64, 00:17:04.649 "state": "online", 00:17:04.649 "raid_level": "raid5f", 00:17:04.649 "superblock": true, 00:17:04.649 "num_base_bdevs": 3, 00:17:04.649 "num_base_bdevs_discovered": 3, 00:17:04.649 "num_base_bdevs_operational": 3, 00:17:04.649 "base_bdevs_list": [ 00:17:04.649 { 00:17:04.649 "name": "BaseBdev1", 00:17:04.649 "uuid": "dda0a09b-db0b-589d-9c96-9fff77ee07b0", 00:17:04.649 "is_configured": true, 00:17:04.649 "data_offset": 2048, 00:17:04.649 "data_size": 63488 00:17:04.649 }, 00:17:04.649 { 00:17:04.649 "name": "BaseBdev2", 00:17:04.649 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:04.649 "is_configured": true, 00:17:04.649 "data_offset": 2048, 00:17:04.649 "data_size": 63488 00:17:04.649 }, 00:17:04.649 { 00:17:04.649 "name": "BaseBdev3", 00:17:04.649 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:04.649 "is_configured": true, 
00:17:04.649 "data_offset": 2048, 00:17:04.649 "data_size": 63488 00:17:04.649 } 00:17:04.649 ] 00:17:04.649 }' 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.649 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.217 [2024-11-05 03:28:18.608707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:05.217 03:28:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.217 03:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:05.476 [2024-11-05 03:28:18.992685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:05.476 /dev/nbd0 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 
-- # (( i <= 20 )) 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:05.476 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.477 1+0 records in 00:17:05.477 1+0 records out 00:17:05.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309559 s, 13.2 MB/s 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:05.477 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:06.045 496+0 records in 00:17:06.045 496+0 records out 00:17:06.045 65011712 bytes (65 MB, 62 MiB) copied, 0.490822 s, 132 MB/s 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:06.045 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:06.303 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:06.304 [2024-11-05 03:28:19.799869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.304 [2024-11-05 03:28:19.817600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.304 "name": "raid_bdev1", 00:17:06.304 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:06.304 "strip_size_kb": 64, 00:17:06.304 "state": "online", 00:17:06.304 "raid_level": "raid5f", 00:17:06.304 "superblock": true, 00:17:06.304 "num_base_bdevs": 3, 00:17:06.304 "num_base_bdevs_discovered": 2, 00:17:06.304 "num_base_bdevs_operational": 2, 00:17:06.304 "base_bdevs_list": [ 00:17:06.304 { 00:17:06.304 "name": null, 00:17:06.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.304 "is_configured": false, 00:17:06.304 "data_offset": 0, 00:17:06.304 "data_size": 63488 00:17:06.304 }, 00:17:06.304 { 00:17:06.304 "name": "BaseBdev2", 00:17:06.304 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:06.304 "is_configured": true, 00:17:06.304 "data_offset": 2048, 00:17:06.304 "data_size": 63488 00:17:06.304 }, 00:17:06.304 { 00:17:06.304 "name": "BaseBdev3", 00:17:06.304 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:06.304 "is_configured": true, 00:17:06.304 "data_offset": 2048, 00:17:06.304 "data_size": 63488 00:17:06.304 } 00:17:06.304 ] 00:17:06.304 }' 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.304 03:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.871 03:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.872 03:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.872 03:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.872 [2024-11-05 03:28:20.341749] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.872 [2024-11-05 03:28:20.357153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:06.872 03:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.872 03:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.872 [2024-11-05 03:28:20.364545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.807 "name": "raid_bdev1", 00:17:07.807 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:07.807 "strip_size_kb": 64, 00:17:07.807 "state": "online", 00:17:07.807 "raid_level": "raid5f", 00:17:07.807 
"superblock": true, 00:17:07.807 "num_base_bdevs": 3, 00:17:07.807 "num_base_bdevs_discovered": 3, 00:17:07.807 "num_base_bdevs_operational": 3, 00:17:07.807 "process": { 00:17:07.807 "type": "rebuild", 00:17:07.807 "target": "spare", 00:17:07.807 "progress": { 00:17:07.807 "blocks": 18432, 00:17:07.807 "percent": 14 00:17:07.807 } 00:17:07.807 }, 00:17:07.807 "base_bdevs_list": [ 00:17:07.807 { 00:17:07.807 "name": "spare", 00:17:07.807 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:07.807 "is_configured": true, 00:17:07.807 "data_offset": 2048, 00:17:07.807 "data_size": 63488 00:17:07.807 }, 00:17:07.807 { 00:17:07.807 "name": "BaseBdev2", 00:17:07.807 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:07.807 "is_configured": true, 00:17:07.807 "data_offset": 2048, 00:17:07.807 "data_size": 63488 00:17:07.807 }, 00:17:07.807 { 00:17:07.807 "name": "BaseBdev3", 00:17:07.807 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:07.807 "is_configured": true, 00:17:07.807 "data_offset": 2048, 00:17:07.807 "data_size": 63488 00:17:07.807 } 00:17:07.807 ] 00:17:07.807 }' 00:17:07.807 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.066 [2024-11-05 03:28:21.521753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:08.066 [2024-11-05 03:28:21.577844] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.066 [2024-11-05 03:28:21.577938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.066 [2024-11-05 03:28:21.577971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.066 [2024-11-05 03:28:21.577983] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.066 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.066 "name": "raid_bdev1", 00:17:08.066 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:08.066 "strip_size_kb": 64, 00:17:08.066 "state": "online", 00:17:08.066 "raid_level": "raid5f", 00:17:08.066 "superblock": true, 00:17:08.066 "num_base_bdevs": 3, 00:17:08.066 "num_base_bdevs_discovered": 2, 00:17:08.066 "num_base_bdevs_operational": 2, 00:17:08.066 "base_bdevs_list": [ 00:17:08.066 { 00:17:08.066 "name": null, 00:17:08.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.066 "is_configured": false, 00:17:08.066 "data_offset": 0, 00:17:08.066 "data_size": 63488 00:17:08.066 }, 00:17:08.066 { 00:17:08.066 "name": "BaseBdev2", 00:17:08.066 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:08.066 "is_configured": true, 00:17:08.066 "data_offset": 2048, 00:17:08.066 "data_size": 63488 00:17:08.066 }, 00:17:08.066 { 00:17:08.066 "name": "BaseBdev3", 00:17:08.066 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:08.066 "is_configured": true, 00:17:08.066 "data_offset": 2048, 00:17:08.066 "data_size": 63488 00:17:08.066 } 00:17:08.067 ] 00:17:08.067 }' 00:17:08.067 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.067 03:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.633 03:28:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.633 "name": "raid_bdev1", 00:17:08.633 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:08.633 "strip_size_kb": 64, 00:17:08.633 "state": "online", 00:17:08.633 "raid_level": "raid5f", 00:17:08.633 "superblock": true, 00:17:08.633 "num_base_bdevs": 3, 00:17:08.633 "num_base_bdevs_discovered": 2, 00:17:08.633 "num_base_bdevs_operational": 2, 00:17:08.633 "base_bdevs_list": [ 00:17:08.633 { 00:17:08.633 "name": null, 00:17:08.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.633 "is_configured": false, 00:17:08.633 "data_offset": 0, 00:17:08.633 "data_size": 63488 00:17:08.633 }, 00:17:08.633 { 00:17:08.633 "name": "BaseBdev2", 00:17:08.633 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:08.633 "is_configured": true, 00:17:08.633 "data_offset": 2048, 00:17:08.633 "data_size": 63488 00:17:08.633 }, 00:17:08.633 { 00:17:08.633 "name": "BaseBdev3", 00:17:08.633 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:08.633 "is_configured": true, 00:17:08.633 "data_offset": 2048, 00:17:08.633 
"data_size": 63488 00:17:08.633 } 00:17:08.633 ] 00:17:08.633 }' 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.633 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.892 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.892 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.892 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.892 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.892 [2024-11-05 03:28:22.308799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.892 [2024-11-05 03:28:22.323565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:08.892 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.892 03:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:08.892 [2024-11-05 03:28:22.331001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.827 "name": "raid_bdev1", 00:17:09.827 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:09.827 "strip_size_kb": 64, 00:17:09.827 "state": "online", 00:17:09.827 "raid_level": "raid5f", 00:17:09.827 "superblock": true, 00:17:09.827 "num_base_bdevs": 3, 00:17:09.827 "num_base_bdevs_discovered": 3, 00:17:09.827 "num_base_bdevs_operational": 3, 00:17:09.827 "process": { 00:17:09.827 "type": "rebuild", 00:17:09.827 "target": "spare", 00:17:09.827 "progress": { 00:17:09.827 "blocks": 18432, 00:17:09.827 "percent": 14 00:17:09.827 } 00:17:09.827 }, 00:17:09.827 "base_bdevs_list": [ 00:17:09.827 { 00:17:09.827 "name": "spare", 00:17:09.827 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:09.827 "is_configured": true, 00:17:09.827 "data_offset": 2048, 00:17:09.827 "data_size": 63488 00:17:09.827 }, 00:17:09.827 { 00:17:09.827 "name": "BaseBdev2", 00:17:09.827 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:09.827 "is_configured": true, 00:17:09.827 "data_offset": 2048, 00:17:09.827 "data_size": 63488 00:17:09.827 }, 00:17:09.827 { 00:17:09.827 "name": "BaseBdev3", 00:17:09.827 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:09.827 "is_configured": true, 00:17:09.827 "data_offset": 2048, 00:17:09.827 "data_size": 63488 00:17:09.827 } 00:17:09.827 ] 00:17:09.827 }' 
00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.827 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:10.086 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=605 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.086 "name": "raid_bdev1", 00:17:10.086 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:10.086 "strip_size_kb": 64, 00:17:10.086 "state": "online", 00:17:10.086 "raid_level": "raid5f", 00:17:10.086 "superblock": true, 00:17:10.086 "num_base_bdevs": 3, 00:17:10.086 "num_base_bdevs_discovered": 3, 00:17:10.086 "num_base_bdevs_operational": 3, 00:17:10.086 "process": { 00:17:10.086 "type": "rebuild", 00:17:10.086 "target": "spare", 00:17:10.086 "progress": { 00:17:10.086 "blocks": 22528, 00:17:10.086 "percent": 17 00:17:10.086 } 00:17:10.086 }, 00:17:10.086 "base_bdevs_list": [ 00:17:10.086 { 00:17:10.086 "name": "spare", 00:17:10.086 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:10.086 "is_configured": true, 00:17:10.086 "data_offset": 2048, 00:17:10.086 "data_size": 63488 00:17:10.086 }, 00:17:10.086 { 00:17:10.086 "name": "BaseBdev2", 00:17:10.086 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:10.086 "is_configured": true, 00:17:10.086 "data_offset": 2048, 00:17:10.086 "data_size": 63488 00:17:10.086 }, 00:17:10.086 { 00:17:10.086 "name": "BaseBdev3", 00:17:10.086 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:10.086 "is_configured": true, 00:17:10.086 "data_offset": 2048, 00:17:10.086 "data_size": 63488 00:17:10.086 } 00:17:10.086 ] 00:17:10.086 }' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.086 03:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.022 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.280 "name": "raid_bdev1", 00:17:11.280 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:11.280 "strip_size_kb": 64, 00:17:11.280 "state": "online", 00:17:11.280 "raid_level": "raid5f", 00:17:11.280 "superblock": true, 00:17:11.280 "num_base_bdevs": 3, 00:17:11.280 "num_base_bdevs_discovered": 3, 00:17:11.280 
"num_base_bdevs_operational": 3, 00:17:11.280 "process": { 00:17:11.280 "type": "rebuild", 00:17:11.280 "target": "spare", 00:17:11.280 "progress": { 00:17:11.280 "blocks": 45056, 00:17:11.280 "percent": 35 00:17:11.280 } 00:17:11.280 }, 00:17:11.280 "base_bdevs_list": [ 00:17:11.280 { 00:17:11.280 "name": "spare", 00:17:11.280 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:11.280 "is_configured": true, 00:17:11.280 "data_offset": 2048, 00:17:11.280 "data_size": 63488 00:17:11.280 }, 00:17:11.280 { 00:17:11.280 "name": "BaseBdev2", 00:17:11.280 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:11.280 "is_configured": true, 00:17:11.280 "data_offset": 2048, 00:17:11.280 "data_size": 63488 00:17:11.280 }, 00:17:11.280 { 00:17:11.280 "name": "BaseBdev3", 00:17:11.280 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:11.280 "is_configured": true, 00:17:11.280 "data_offset": 2048, 00:17:11.280 "data_size": 63488 00:17:11.280 } 00:17:11.280 ] 00:17:11.280 }' 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.280 03:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.216 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.513 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.513 "name": "raid_bdev1", 00:17:12.513 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:12.513 "strip_size_kb": 64, 00:17:12.513 "state": "online", 00:17:12.513 "raid_level": "raid5f", 00:17:12.513 "superblock": true, 00:17:12.513 "num_base_bdevs": 3, 00:17:12.513 "num_base_bdevs_discovered": 3, 00:17:12.513 "num_base_bdevs_operational": 3, 00:17:12.513 "process": { 00:17:12.513 "type": "rebuild", 00:17:12.513 "target": "spare", 00:17:12.513 "progress": { 00:17:12.513 "blocks": 69632, 00:17:12.513 "percent": 54 00:17:12.513 } 00:17:12.513 }, 00:17:12.513 "base_bdevs_list": [ 00:17:12.513 { 00:17:12.513 "name": "spare", 00:17:12.513 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:12.513 "is_configured": true, 00:17:12.513 "data_offset": 2048, 00:17:12.513 "data_size": 63488 00:17:12.513 }, 00:17:12.513 { 00:17:12.513 "name": "BaseBdev2", 00:17:12.513 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:12.513 "is_configured": true, 00:17:12.513 "data_offset": 2048, 00:17:12.513 "data_size": 63488 00:17:12.513 }, 00:17:12.513 { 00:17:12.513 "name": "BaseBdev3", 
00:17:12.513 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:12.513 "is_configured": true, 00:17:12.513 "data_offset": 2048, 00:17:12.513 "data_size": 63488 00:17:12.513 } 00:17:12.513 ] 00:17:12.513 }' 00:17:12.513 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.513 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.513 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.513 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.513 03:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.449 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.450 03:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:13.450 03:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.450 "name": "raid_bdev1", 00:17:13.450 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:13.450 "strip_size_kb": 64, 00:17:13.450 "state": "online", 00:17:13.450 "raid_level": "raid5f", 00:17:13.450 "superblock": true, 00:17:13.450 "num_base_bdevs": 3, 00:17:13.450 "num_base_bdevs_discovered": 3, 00:17:13.450 "num_base_bdevs_operational": 3, 00:17:13.450 "process": { 00:17:13.450 "type": "rebuild", 00:17:13.450 "target": "spare", 00:17:13.450 "progress": { 00:17:13.450 "blocks": 92160, 00:17:13.450 "percent": 72 00:17:13.450 } 00:17:13.450 }, 00:17:13.450 "base_bdevs_list": [ 00:17:13.450 { 00:17:13.450 "name": "spare", 00:17:13.450 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:13.450 "is_configured": true, 00:17:13.450 "data_offset": 2048, 00:17:13.450 "data_size": 63488 00:17:13.450 }, 00:17:13.450 { 00:17:13.450 "name": "BaseBdev2", 00:17:13.450 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:13.450 "is_configured": true, 00:17:13.450 "data_offset": 2048, 00:17:13.450 "data_size": 63488 00:17:13.450 }, 00:17:13.450 { 00:17:13.450 "name": "BaseBdev3", 00:17:13.450 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:13.450 "is_configured": true, 00:17:13.450 "data_offset": 2048, 00:17:13.450 "data_size": 63488 00:17:13.450 } 00:17:13.450 ] 00:17:13.450 }' 00:17:13.450 03:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.450 03:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.450 03:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.709 03:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.709 03:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.645 03:28:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.645 "name": "raid_bdev1", 00:17:14.645 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:14.645 "strip_size_kb": 64, 00:17:14.645 "state": "online", 00:17:14.645 "raid_level": "raid5f", 00:17:14.645 "superblock": true, 00:17:14.645 "num_base_bdevs": 3, 00:17:14.645 "num_base_bdevs_discovered": 3, 00:17:14.645 "num_base_bdevs_operational": 3, 00:17:14.645 "process": { 00:17:14.645 "type": "rebuild", 00:17:14.645 "target": "spare", 00:17:14.645 "progress": { 00:17:14.645 "blocks": 116736, 00:17:14.645 "percent": 91 00:17:14.645 } 00:17:14.645 }, 00:17:14.645 "base_bdevs_list": [ 00:17:14.645 { 00:17:14.645 "name": "spare", 00:17:14.645 "uuid": 
"f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:14.645 "is_configured": true, 00:17:14.645 "data_offset": 2048, 00:17:14.645 "data_size": 63488 00:17:14.645 }, 00:17:14.645 { 00:17:14.645 "name": "BaseBdev2", 00:17:14.645 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:14.645 "is_configured": true, 00:17:14.645 "data_offset": 2048, 00:17:14.645 "data_size": 63488 00:17:14.645 }, 00:17:14.645 { 00:17:14.645 "name": "BaseBdev3", 00:17:14.645 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:14.645 "is_configured": true, 00:17:14.645 "data_offset": 2048, 00:17:14.645 "data_size": 63488 00:17:14.645 } 00:17:14.645 ] 00:17:14.645 }' 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.645 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.905 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.905 03:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.163 [2024-11-05 03:28:28.606829] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:15.163 [2024-11-05 03:28:28.606939] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:15.163 [2024-11-05 03:28:28.607094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.731 "name": "raid_bdev1", 00:17:15.731 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:15.731 "strip_size_kb": 64, 00:17:15.731 "state": "online", 00:17:15.731 "raid_level": "raid5f", 00:17:15.731 "superblock": true, 00:17:15.731 "num_base_bdevs": 3, 00:17:15.731 "num_base_bdevs_discovered": 3, 00:17:15.731 "num_base_bdevs_operational": 3, 00:17:15.731 "base_bdevs_list": [ 00:17:15.731 { 00:17:15.731 "name": "spare", 00:17:15.731 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:15.731 "is_configured": true, 00:17:15.731 "data_offset": 2048, 00:17:15.731 "data_size": 63488 00:17:15.731 }, 00:17:15.731 { 00:17:15.731 "name": "BaseBdev2", 00:17:15.731 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:15.731 "is_configured": true, 00:17:15.731 "data_offset": 2048, 00:17:15.731 "data_size": 63488 00:17:15.731 }, 00:17:15.731 { 00:17:15.731 "name": "BaseBdev3", 00:17:15.731 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:15.731 "is_configured": true, 00:17:15.731 "data_offset": 2048, 00:17:15.731 "data_size": 63488 00:17:15.731 } 
00:17:15.731 ] 00:17:15.731 }' 00:17:15.731 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.989 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.990 "name": "raid_bdev1", 00:17:15.990 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:15.990 "strip_size_kb": 64, 00:17:15.990 "state": "online", 00:17:15.990 "raid_level": 
"raid5f", 00:17:15.990 "superblock": true, 00:17:15.990 "num_base_bdevs": 3, 00:17:15.990 "num_base_bdevs_discovered": 3, 00:17:15.990 "num_base_bdevs_operational": 3, 00:17:15.990 "base_bdevs_list": [ 00:17:15.990 { 00:17:15.990 "name": "spare", 00:17:15.990 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:15.990 "is_configured": true, 00:17:15.990 "data_offset": 2048, 00:17:15.990 "data_size": 63488 00:17:15.990 }, 00:17:15.990 { 00:17:15.990 "name": "BaseBdev2", 00:17:15.990 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:15.990 "is_configured": true, 00:17:15.990 "data_offset": 2048, 00:17:15.990 "data_size": 63488 00:17:15.990 }, 00:17:15.990 { 00:17:15.990 "name": "BaseBdev3", 00:17:15.990 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:15.990 "is_configured": true, 00:17:15.990 "data_offset": 2048, 00:17:15.990 "data_size": 63488 00:17:15.990 } 00:17:15.990 ] 00:17:15.990 }' 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.990 03:28:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.990 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.248 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.248 "name": "raid_bdev1", 00:17:16.248 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:16.248 "strip_size_kb": 64, 00:17:16.248 "state": "online", 00:17:16.248 "raid_level": "raid5f", 00:17:16.248 "superblock": true, 00:17:16.248 "num_base_bdevs": 3, 00:17:16.248 "num_base_bdevs_discovered": 3, 00:17:16.248 "num_base_bdevs_operational": 3, 00:17:16.248 "base_bdevs_list": [ 00:17:16.248 { 00:17:16.248 "name": "spare", 00:17:16.248 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:16.248 "is_configured": true, 00:17:16.248 "data_offset": 2048, 00:17:16.248 "data_size": 63488 00:17:16.248 }, 00:17:16.248 { 00:17:16.248 "name": "BaseBdev2", 00:17:16.248 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:16.248 "is_configured": true, 00:17:16.248 "data_offset": 2048, 00:17:16.248 
"data_size": 63488 00:17:16.248 }, 00:17:16.248 { 00:17:16.248 "name": "BaseBdev3", 00:17:16.248 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:16.248 "is_configured": true, 00:17:16.248 "data_offset": 2048, 00:17:16.248 "data_size": 63488 00:17:16.248 } 00:17:16.248 ] 00:17:16.248 }' 00:17:16.248 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.248 03:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.506 [2024-11-05 03:28:30.112321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.506 [2024-11-05 03:28:30.112368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.506 [2024-11-05 03:28:30.112474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.506 [2024-11-05 03:28:30.112577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.506 [2024-11-05 03:28:30.112602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.506 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.765 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:17.023 /dev/nbd0 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 
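The `waitfornbd` trace above polls `/proc/partitions` until the nbd device name shows up, retrying a bounded number of times before giving up. A standalone sketch of that retry idiom (assuming a Linux `/proc/partitions`; the helper name mirrors the one traced from autotest_common.sh):

```shell
# Poll /proc/partitions until the named device appears, or give up
# after 20 attempts; mirrors the waitfornbd helper traced in the log.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}

# Example: returns non-zero for a device that never appears.
waitfornbd no_such_device_xyz || echo "timed out waiting for nbd"
```

Bounding the loop keeps a hung `nbd_start_disk` from stalling the whole test run; the subsequent `dd ... iflag=direct` read in the log then confirms the device is actually servicing I/O, not just listed.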
00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.023 1+0 records in 00:17:17.023 1+0 records out 00:17:17.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326695 s, 12.5 MB/s 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.023 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:17.282 /dev/nbd1 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.282 1+0 records in 00:17:17.282 1+0 records out 00:17:17.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394005 s, 10.4 MB/s 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- 
# '[' 4096 '!=' 0 ']' 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.282 03:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.541 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.799 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:18.057 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.057 [2024-11-05 03:28:31.648654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.057 [2024-11-05 03:28:31.648771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.057 [2024-11-05 03:28:31.648815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:18.057 [2024-11-05 03:28:31.648833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.057 [2024-11-05 03:28:31.651808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.058 [2024-11-05 03:28:31.651977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.058 [2024-11-05 03:28:31.652166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.058 [2024-11-05 03:28:31.652250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.058 [2024-11-05 03:28:31.652455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.058 [2024-11-05 03:28:31.652602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.058 spare 00:17:18.058 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.058 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:18.058 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.058 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.316 [2024-11-05 03:28:31.752776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:18.316 [2024-11-05 03:28:31.752856] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:18.316 [2024-11-05 03:28:31.753293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:18.316 [2024-11-05 03:28:31.758230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:18.316 [2024-11-05 03:28:31.758262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:18.316 [2024-11-05 03:28:31.758569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.316 03:28:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.316 "name": "raid_bdev1", 00:17:18.316 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:18.316 "strip_size_kb": 64, 00:17:18.316 "state": "online", 00:17:18.316 "raid_level": "raid5f", 00:17:18.316 "superblock": true, 00:17:18.316 "num_base_bdevs": 3, 00:17:18.316 "num_base_bdevs_discovered": 3, 00:17:18.316 "num_base_bdevs_operational": 3, 00:17:18.316 "base_bdevs_list": [ 00:17:18.316 { 00:17:18.316 "name": "spare", 00:17:18.316 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:18.316 "is_configured": true, 00:17:18.316 "data_offset": 2048, 00:17:18.316 "data_size": 63488 00:17:18.316 }, 00:17:18.316 { 00:17:18.316 "name": "BaseBdev2", 00:17:18.316 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:18.316 "is_configured": true, 00:17:18.316 "data_offset": 2048, 00:17:18.316 "data_size": 63488 00:17:18.316 }, 00:17:18.316 { 00:17:18.316 "name": "BaseBdev3", 00:17:18.316 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:18.316 "is_configured": true, 00:17:18.316 "data_offset": 2048, 00:17:18.316 "data_size": 63488 00:17:18.316 } 00:17:18.316 ] 00:17:18.316 }' 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.316 03:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.883 03:28:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.883 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.883 "name": "raid_bdev1", 00:17:18.884 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:18.884 "strip_size_kb": 64, 00:17:18.884 "state": "online", 00:17:18.884 "raid_level": "raid5f", 00:17:18.884 "superblock": true, 00:17:18.884 "num_base_bdevs": 3, 00:17:18.884 "num_base_bdevs_discovered": 3, 00:17:18.884 "num_base_bdevs_operational": 3, 00:17:18.884 "base_bdevs_list": [ 00:17:18.884 { 00:17:18.884 "name": "spare", 00:17:18.884 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:18.884 "is_configured": true, 00:17:18.884 "data_offset": 2048, 00:17:18.884 "data_size": 63488 00:17:18.884 }, 00:17:18.884 { 00:17:18.884 "name": "BaseBdev2", 00:17:18.884 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:18.884 "is_configured": true, 00:17:18.884 "data_offset": 2048, 00:17:18.884 "data_size": 63488 00:17:18.884 }, 00:17:18.884 { 00:17:18.884 "name": "BaseBdev3", 00:17:18.884 "uuid": 
"f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:18.884 "is_configured": true, 00:17:18.884 "data_offset": 2048, 00:17:18.884 "data_size": 63488 00:17:18.884 } 00:17:18.884 ] 00:17:18.884 }' 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.884 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.884 [2024-11-05 03:28:32.520351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.142 
03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.142 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.142 "name": "raid_bdev1", 00:17:19.142 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:19.142 "strip_size_kb": 64, 00:17:19.142 "state": "online", 00:17:19.142 "raid_level": "raid5f", 00:17:19.142 "superblock": true, 00:17:19.142 "num_base_bdevs": 3, 00:17:19.142 "num_base_bdevs_discovered": 2, 00:17:19.142 "num_base_bdevs_operational": 2, 
00:17:19.143 "base_bdevs_list": [ 00:17:19.143 { 00:17:19.143 "name": null, 00:17:19.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.143 "is_configured": false, 00:17:19.143 "data_offset": 0, 00:17:19.143 "data_size": 63488 00:17:19.143 }, 00:17:19.143 { 00:17:19.143 "name": "BaseBdev2", 00:17:19.143 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:19.143 "is_configured": true, 00:17:19.143 "data_offset": 2048, 00:17:19.143 "data_size": 63488 00:17:19.143 }, 00:17:19.143 { 00:17:19.143 "name": "BaseBdev3", 00:17:19.143 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:19.143 "is_configured": true, 00:17:19.143 "data_offset": 2048, 00:17:19.143 "data_size": 63488 00:17:19.143 } 00:17:19.143 ] 00:17:19.143 }' 00:17:19.143 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.143 03:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.710 03:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.710 03:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.710 03:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.710 [2024-11-05 03:28:33.088596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.710 [2024-11-05 03:28:33.088868] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.710 [2024-11-05 03:28:33.088903] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:19.710 [2024-11-05 03:28:33.088961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.710 [2024-11-05 03:28:33.106706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:19.710 03:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.710 03:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:19.710 [2024-11-05 03:28:33.115663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.647 "name": "raid_bdev1", 00:17:20.647 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:20.647 "strip_size_kb": 64, 00:17:20.647 "state": "online", 00:17:20.647 
"raid_level": "raid5f", 00:17:20.647 "superblock": true, 00:17:20.647 "num_base_bdevs": 3, 00:17:20.647 "num_base_bdevs_discovered": 3, 00:17:20.647 "num_base_bdevs_operational": 3, 00:17:20.647 "process": { 00:17:20.647 "type": "rebuild", 00:17:20.647 "target": "spare", 00:17:20.647 "progress": { 00:17:20.647 "blocks": 18432, 00:17:20.647 "percent": 14 00:17:20.647 } 00:17:20.647 }, 00:17:20.647 "base_bdevs_list": [ 00:17:20.647 { 00:17:20.647 "name": "spare", 00:17:20.647 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:20.647 "is_configured": true, 00:17:20.647 "data_offset": 2048, 00:17:20.647 "data_size": 63488 00:17:20.647 }, 00:17:20.647 { 00:17:20.647 "name": "BaseBdev2", 00:17:20.647 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:20.647 "is_configured": true, 00:17:20.647 "data_offset": 2048, 00:17:20.647 "data_size": 63488 00:17:20.647 }, 00:17:20.647 { 00:17:20.647 "name": "BaseBdev3", 00:17:20.647 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:20.647 "is_configured": true, 00:17:20.647 "data_offset": 2048, 00:17:20.647 "data_size": 63488 00:17:20.647 } 00:17:20.647 ] 00:17:20.647 }' 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.647 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.647 [2024-11-05 03:28:34.273702] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.906 [2024-11-05 03:28:34.331120] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.906 [2024-11-05 03:28:34.331229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.906 [2024-11-05 03:28:34.331255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.906 [2024-11-05 03:28:34.331271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.906 "name": "raid_bdev1", 00:17:20.906 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:20.906 "strip_size_kb": 64, 00:17:20.906 "state": "online", 00:17:20.906 "raid_level": "raid5f", 00:17:20.906 "superblock": true, 00:17:20.906 "num_base_bdevs": 3, 00:17:20.906 "num_base_bdevs_discovered": 2, 00:17:20.906 "num_base_bdevs_operational": 2, 00:17:20.906 "base_bdevs_list": [ 00:17:20.906 { 00:17:20.906 "name": null, 00:17:20.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.906 "is_configured": false, 00:17:20.906 "data_offset": 0, 00:17:20.906 "data_size": 63488 00:17:20.906 }, 00:17:20.906 { 00:17:20.906 "name": "BaseBdev2", 00:17:20.906 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:20.906 "is_configured": true, 00:17:20.906 "data_offset": 2048, 00:17:20.906 "data_size": 63488 00:17:20.906 }, 00:17:20.906 { 00:17:20.906 "name": "BaseBdev3", 00:17:20.906 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:20.906 "is_configured": true, 00:17:20.906 "data_offset": 2048, 00:17:20.906 "data_size": 63488 00:17:20.906 } 00:17:20.906 ] 00:17:20.906 }' 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.906 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.475 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.475 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.475 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.475 [2024-11-05 03:28:34.871978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.475 [2024-11-05 03:28:34.872061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.475 [2024-11-05 03:28:34.872091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:21.475 [2024-11-05 03:28:34.872113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.475 [2024-11-05 03:28:34.872747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.475 [2024-11-05 03:28:34.872797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.475 [2024-11-05 03:28:34.872923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:21.475 [2024-11-05 03:28:34.872948] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.475 [2024-11-05 03:28:34.872963] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:21.475 [2024-11-05 03:28:34.872998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.475 [2024-11-05 03:28:34.888216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:21.475 spare 00:17:21.475 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.475 03:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:21.475 [2024-11-05 03:28:34.895460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.413 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.414 "name": "raid_bdev1", 00:17:22.414 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:22.414 "strip_size_kb": 64, 00:17:22.414 "state": 
"online", 00:17:22.414 "raid_level": "raid5f", 00:17:22.414 "superblock": true, 00:17:22.414 "num_base_bdevs": 3, 00:17:22.414 "num_base_bdevs_discovered": 3, 00:17:22.414 "num_base_bdevs_operational": 3, 00:17:22.414 "process": { 00:17:22.414 "type": "rebuild", 00:17:22.414 "target": "spare", 00:17:22.414 "progress": { 00:17:22.414 "blocks": 18432, 00:17:22.414 "percent": 14 00:17:22.414 } 00:17:22.414 }, 00:17:22.414 "base_bdevs_list": [ 00:17:22.414 { 00:17:22.414 "name": "spare", 00:17:22.414 "uuid": "f5ec9253-4137-514c-85c6-34f6eb6cee8d", 00:17:22.414 "is_configured": true, 00:17:22.414 "data_offset": 2048, 00:17:22.414 "data_size": 63488 00:17:22.414 }, 00:17:22.414 { 00:17:22.414 "name": "BaseBdev2", 00:17:22.414 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:22.414 "is_configured": true, 00:17:22.414 "data_offset": 2048, 00:17:22.414 "data_size": 63488 00:17:22.414 }, 00:17:22.414 { 00:17:22.414 "name": "BaseBdev3", 00:17:22.414 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:22.414 "is_configured": true, 00:17:22.414 "data_offset": 2048, 00:17:22.414 "data_size": 63488 00:17:22.414 } 00:17:22.414 ] 00:17:22.414 }' 00:17:22.414 03:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.414 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.414 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.673 [2024-11-05 03:28:36.066405] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.673 [2024-11-05 03:28:36.110686] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.673 [2024-11-05 03:28:36.111009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.673 [2024-11-05 03:28:36.111154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.673 [2024-11-05 03:28:36.111206] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.673 "name": "raid_bdev1", 00:17:22.673 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:22.673 "strip_size_kb": 64, 00:17:22.673 "state": "online", 00:17:22.673 "raid_level": "raid5f", 00:17:22.673 "superblock": true, 00:17:22.673 "num_base_bdevs": 3, 00:17:22.673 "num_base_bdevs_discovered": 2, 00:17:22.673 "num_base_bdevs_operational": 2, 00:17:22.673 "base_bdevs_list": [ 00:17:22.673 { 00:17:22.673 "name": null, 00:17:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.673 "is_configured": false, 00:17:22.673 "data_offset": 0, 00:17:22.673 "data_size": 63488 00:17:22.673 }, 00:17:22.673 { 00:17:22.673 "name": "BaseBdev2", 00:17:22.673 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:22.673 "is_configured": true, 00:17:22.673 "data_offset": 2048, 00:17:22.673 "data_size": 63488 00:17:22.673 }, 00:17:22.673 { 00:17:22.673 "name": "BaseBdev3", 00:17:22.673 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:22.673 "is_configured": true, 00:17:22.673 "data_offset": 2048, 00:17:22.673 "data_size": 63488 00:17:22.673 } 00:17:22.673 ] 00:17:22.673 }' 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.673 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.242 "name": "raid_bdev1", 00:17:23.242 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:23.242 "strip_size_kb": 64, 00:17:23.242 "state": "online", 00:17:23.242 "raid_level": "raid5f", 00:17:23.242 "superblock": true, 00:17:23.242 "num_base_bdevs": 3, 00:17:23.242 "num_base_bdevs_discovered": 2, 00:17:23.242 "num_base_bdevs_operational": 2, 00:17:23.242 "base_bdevs_list": [ 00:17:23.242 { 00:17:23.242 "name": null, 00:17:23.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.242 "is_configured": false, 00:17:23.242 "data_offset": 0, 00:17:23.242 "data_size": 63488 00:17:23.242 }, 00:17:23.242 { 00:17:23.242 "name": "BaseBdev2", 00:17:23.242 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:23.242 "is_configured": true, 00:17:23.242 "data_offset": 2048, 00:17:23.242 "data_size": 63488 00:17:23.242 }, 00:17:23.242 { 00:17:23.242 "name": "BaseBdev3", 00:17:23.242 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:23.242 "is_configured": true, 
00:17:23.242 "data_offset": 2048, 00:17:23.242 "data_size": 63488 00:17:23.242 } 00:17:23.242 ] 00:17:23.242 }' 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.242 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.503 [2024-11-05 03:28:36.880638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.503 [2024-11-05 03:28:36.880719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.503 [2024-11-05 03:28:36.880766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:23.503 [2024-11-05 03:28:36.880785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.503 [2024-11-05 03:28:36.881487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.503 [2024-11-05 
03:28:36.881515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.503 [2024-11-05 03:28:36.881626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:23.503 [2024-11-05 03:28:36.881649] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.503 [2024-11-05 03:28:36.881681] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:23.503 [2024-11-05 03:28:36.881706] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:23.503 BaseBdev1 00:17:23.503 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.503 03:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.440 03:28:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.440 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.440 "name": "raid_bdev1", 00:17:24.440 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:24.440 "strip_size_kb": 64, 00:17:24.440 "state": "online", 00:17:24.440 "raid_level": "raid5f", 00:17:24.440 "superblock": true, 00:17:24.440 "num_base_bdevs": 3, 00:17:24.440 "num_base_bdevs_discovered": 2, 00:17:24.440 "num_base_bdevs_operational": 2, 00:17:24.440 "base_bdevs_list": [ 00:17:24.440 { 00:17:24.440 "name": null, 00:17:24.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.440 "is_configured": false, 00:17:24.440 "data_offset": 0, 00:17:24.440 "data_size": 63488 00:17:24.440 }, 00:17:24.440 { 00:17:24.441 "name": "BaseBdev2", 00:17:24.441 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:24.441 "is_configured": true, 00:17:24.441 "data_offset": 2048, 00:17:24.441 "data_size": 63488 00:17:24.441 }, 00:17:24.441 { 00:17:24.441 "name": "BaseBdev3", 00:17:24.441 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:24.441 "is_configured": true, 00:17:24.441 "data_offset": 2048, 00:17:24.441 "data_size": 63488 00:17:24.441 } 00:17:24.441 ] 00:17:24.441 }' 00:17:24.441 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.441 03:28:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.009 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.009 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.010 "name": "raid_bdev1", 00:17:25.010 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:25.010 "strip_size_kb": 64, 00:17:25.010 "state": "online", 00:17:25.010 "raid_level": "raid5f", 00:17:25.010 "superblock": true, 00:17:25.010 "num_base_bdevs": 3, 00:17:25.010 "num_base_bdevs_discovered": 2, 00:17:25.010 "num_base_bdevs_operational": 2, 00:17:25.010 "base_bdevs_list": [ 00:17:25.010 { 00:17:25.010 "name": null, 00:17:25.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.010 "is_configured": false, 00:17:25.010 "data_offset": 0, 00:17:25.010 "data_size": 63488 00:17:25.010 }, 00:17:25.010 { 00:17:25.010 "name": "BaseBdev2", 00:17:25.010 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 
00:17:25.010 "is_configured": true, 00:17:25.010 "data_offset": 2048, 00:17:25.010 "data_size": 63488 00:17:25.010 }, 00:17:25.010 { 00:17:25.010 "name": "BaseBdev3", 00:17:25.010 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:25.010 "is_configured": true, 00:17:25.010 "data_offset": 2048, 00:17:25.010 "data_size": 63488 00:17:25.010 } 00:17:25.010 ] 00:17:25.010 }' 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.010 03:28:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.010 [2024-11-05 03:28:38.577281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.010 [2024-11-05 03:28:38.577521] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.010 [2024-11-05 03:28:38.577547] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.010 request: 00:17:25.010 { 00:17:25.010 "base_bdev": "BaseBdev1", 00:17:25.010 "raid_bdev": "raid_bdev1", 00:17:25.010 "method": "bdev_raid_add_base_bdev", 00:17:25.010 "req_id": 1 00:17:25.010 } 00:17:25.010 Got JSON-RPC error response 00:17:25.010 response: 00:17:25.010 { 00:17:25.010 "code": -22, 00:17:25.010 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:25.010 } 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:25.010 03:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.388 "name": "raid_bdev1", 00:17:26.388 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:26.388 "strip_size_kb": 64, 00:17:26.388 "state": "online", 00:17:26.388 "raid_level": "raid5f", 00:17:26.388 "superblock": true, 00:17:26.388 "num_base_bdevs": 3, 00:17:26.388 "num_base_bdevs_discovered": 2, 00:17:26.388 "num_base_bdevs_operational": 2, 00:17:26.388 "base_bdevs_list": [ 00:17:26.388 { 00:17:26.388 "name": null, 00:17:26.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.388 "is_configured": false, 00:17:26.388 "data_offset": 0, 00:17:26.388 "data_size": 63488 00:17:26.388 }, 00:17:26.388 { 00:17:26.388 
"name": "BaseBdev2", 00:17:26.388 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:26.388 "is_configured": true, 00:17:26.388 "data_offset": 2048, 00:17:26.388 "data_size": 63488 00:17:26.388 }, 00:17:26.388 { 00:17:26.388 "name": "BaseBdev3", 00:17:26.388 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:26.388 "is_configured": true, 00:17:26.388 "data_offset": 2048, 00:17:26.388 "data_size": 63488 00:17:26.388 } 00:17:26.388 ] 00:17:26.388 }' 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.388 03:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.646 "name": "raid_bdev1", 00:17:26.646 "uuid": "0e615e89-2497-405a-ab19-0c064cc84a6c", 00:17:26.646 
"strip_size_kb": 64, 00:17:26.646 "state": "online", 00:17:26.646 "raid_level": "raid5f", 00:17:26.646 "superblock": true, 00:17:26.646 "num_base_bdevs": 3, 00:17:26.646 "num_base_bdevs_discovered": 2, 00:17:26.646 "num_base_bdevs_operational": 2, 00:17:26.646 "base_bdevs_list": [ 00:17:26.646 { 00:17:26.646 "name": null, 00:17:26.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.646 "is_configured": false, 00:17:26.646 "data_offset": 0, 00:17:26.646 "data_size": 63488 00:17:26.646 }, 00:17:26.646 { 00:17:26.646 "name": "BaseBdev2", 00:17:26.646 "uuid": "395ce09c-ffff-5260-8096-329125cc1516", 00:17:26.646 "is_configured": true, 00:17:26.646 "data_offset": 2048, 00:17:26.646 "data_size": 63488 00:17:26.646 }, 00:17:26.646 { 00:17:26.646 "name": "BaseBdev3", 00:17:26.646 "uuid": "f87c5f92-10bc-59cb-86db-701c31463e0e", 00:17:26.646 "is_configured": true, 00:17:26.646 "data_offset": 2048, 00:17:26.646 "data_size": 63488 00:17:26.646 } 00:17:26.646 ] 00:17:26.646 }' 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82143 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82143 ']' 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82143 00:17:26.646 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:26.905 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.905 03:28:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82143 00:17:26.905 killing process with pid 82143 00:17:26.905 Received shutdown signal, test time was about 60.000000 seconds 00:17:26.905 00:17:26.905 Latency(us) 00:17:26.905 [2024-11-05T03:28:40.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.905 [2024-11-05T03:28:40.544Z] =================================================================================================================== 00:17:26.905 [2024-11-05T03:28:40.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.905 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:26.905 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:26.905 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82143' 00:17:26.905 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82143 00:17:26.905 [2024-11-05 03:28:40.309388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.905 03:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82143 00:17:26.905 [2024-11-05 03:28:40.309538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.905 [2024-11-05 03:28:40.309622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.906 [2024-11-05 03:28:40.309642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:27.165 [2024-11-05 03:28:40.681841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.127 03:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:28.127 00:17:28.127 real 0m24.983s 00:17:28.127 user 0m33.344s 
00:17:28.127 sys 0m2.628s 00:17:28.127 03:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:28.127 03:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.127 ************************************ 00:17:28.127 END TEST raid5f_rebuild_test_sb 00:17:28.127 ************************************ 00:17:28.387 03:28:41 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:28.387 03:28:41 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:28.387 03:28:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:28.387 03:28:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:28.387 03:28:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.387 ************************************ 00:17:28.387 START TEST raid5f_state_function_test 00:17:28.387 ************************************ 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82910 00:17:28.387 Process raid pid: 82910 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82910' 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82910 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 82910 ']' 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:28.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:28.387 03:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.387 [2024-11-05 03:28:41.919045] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:17:28.387 [2024-11-05 03:28:41.919945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.646 [2024-11-05 03:28:42.112849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.646 [2024-11-05 03:28:42.274739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.905 [2024-11-05 03:28:42.512260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.905 [2024-11-05 03:28:42.512335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.473 [2024-11-05 03:28:42.973954] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.473 [2024-11-05 03:28:42.974012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.473 [2024-11-05 03:28:42.974028] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.473 [2024-11-05 03:28:42.974044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.473 [2024-11-05 03:28:42.974054] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:29.473 [2024-11-05 03:28:42.974069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:29.473 [2024-11-05 03:28:42.974079] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:29.473 [2024-11-05 03:28:42.974092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.473 03:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.473 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.474 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.474 "name": "Existed_Raid", 00:17:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.474 "strip_size_kb": 64, 00:17:29.474 "state": "configuring", 00:17:29.474 "raid_level": "raid5f", 00:17:29.474 "superblock": false, 00:17:29.474 "num_base_bdevs": 4, 00:17:29.474 "num_base_bdevs_discovered": 0, 00:17:29.474 "num_base_bdevs_operational": 4, 00:17:29.474 "base_bdevs_list": [ 00:17:29.474 { 00:17:29.474 "name": "BaseBdev1", 00:17:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.474 "is_configured": false, 00:17:29.474 "data_offset": 0, 00:17:29.474 "data_size": 0 00:17:29.474 }, 00:17:29.474 { 00:17:29.474 "name": "BaseBdev2", 00:17:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.474 "is_configured": false, 00:17:29.474 "data_offset": 0, 00:17:29.474 "data_size": 0 00:17:29.474 }, 00:17:29.474 { 00:17:29.474 "name": "BaseBdev3", 00:17:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.474 "is_configured": false, 00:17:29.474 "data_offset": 0, 00:17:29.474 "data_size": 0 00:17:29.474 }, 00:17:29.474 { 00:17:29.474 "name": "BaseBdev4", 00:17:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.474 "is_configured": false, 00:17:29.474 "data_offset": 0, 00:17:29.474 "data_size": 0 00:17:29.474 } 00:17:29.474 ] 00:17:29.474 }' 00:17:29.474 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.474 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 [2024-11-05 03:28:43.522098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.045 [2024-11-05 03:28:43.522158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 [2024-11-05 03:28:43.534118] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.045 [2024-11-05 03:28:43.534204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.045 [2024-11-05 03:28:43.534219] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.045 [2024-11-05 03:28:43.534251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.045 [2024-11-05 03:28:43.534261] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.045 [2024-11-05 03:28:43.534275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.045 [2024-11-05 03:28:43.534285] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:30.045 [2024-11-05 03:28:43.534298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 [2024-11-05 03:28:43.584733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.045 BaseBdev1 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.045 
03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.045 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 [ 00:17:30.045 { 00:17:30.045 "name": "BaseBdev1", 00:17:30.046 "aliases": [ 00:17:30.046 "03add45f-4273-4f4f-be21-074e2e039a1f" 00:17:30.046 ], 00:17:30.046 "product_name": "Malloc disk", 00:17:30.046 "block_size": 512, 00:17:30.046 "num_blocks": 65536, 00:17:30.046 "uuid": "03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:30.046 "assigned_rate_limits": { 00:17:30.046 "rw_ios_per_sec": 0, 00:17:30.046 "rw_mbytes_per_sec": 0, 00:17:30.046 "r_mbytes_per_sec": 0, 00:17:30.046 "w_mbytes_per_sec": 0 00:17:30.046 }, 00:17:30.046 "claimed": true, 00:17:30.046 "claim_type": "exclusive_write", 00:17:30.046 "zoned": false, 00:17:30.046 "supported_io_types": { 00:17:30.046 "read": true, 00:17:30.046 "write": true, 00:17:30.046 "unmap": true, 00:17:30.046 "flush": true, 00:17:30.046 "reset": true, 00:17:30.046 "nvme_admin": false, 00:17:30.046 "nvme_io": false, 00:17:30.046 "nvme_io_md": false, 00:17:30.046 "write_zeroes": true, 00:17:30.046 "zcopy": true, 00:17:30.046 "get_zone_info": false, 00:17:30.046 "zone_management": false, 00:17:30.046 "zone_append": false, 00:17:30.046 "compare": false, 00:17:30.046 "compare_and_write": false, 00:17:30.046 "abort": true, 00:17:30.046 "seek_hole": false, 00:17:30.046 "seek_data": false, 00:17:30.046 "copy": true, 00:17:30.046 "nvme_iov_md": false 00:17:30.046 }, 00:17:30.046 "memory_domains": [ 00:17:30.046 { 00:17:30.046 "dma_device_id": "system", 00:17:30.046 "dma_device_type": 1 00:17:30.046 }, 00:17:30.046 { 00:17:30.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.046 "dma_device_type": 2 00:17:30.046 } 00:17:30.046 ], 00:17:30.046 "driver_specific": {} 00:17:30.046 } 
00:17:30.046 ] 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.046 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:30.305 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.305 "name": "Existed_Raid", 00:17:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.305 "strip_size_kb": 64, 00:17:30.305 "state": "configuring", 00:17:30.305 "raid_level": "raid5f", 00:17:30.305 "superblock": false, 00:17:30.305 "num_base_bdevs": 4, 00:17:30.305 "num_base_bdevs_discovered": 1, 00:17:30.305 "num_base_bdevs_operational": 4, 00:17:30.305 "base_bdevs_list": [ 00:17:30.305 { 00:17:30.305 "name": "BaseBdev1", 00:17:30.305 "uuid": "03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:30.305 "is_configured": true, 00:17:30.305 "data_offset": 0, 00:17:30.305 "data_size": 65536 00:17:30.305 }, 00:17:30.305 { 00:17:30.305 "name": "BaseBdev2", 00:17:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.305 "is_configured": false, 00:17:30.305 "data_offset": 0, 00:17:30.305 "data_size": 0 00:17:30.305 }, 00:17:30.305 { 00:17:30.305 "name": "BaseBdev3", 00:17:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.305 "is_configured": false, 00:17:30.305 "data_offset": 0, 00:17:30.305 "data_size": 0 00:17:30.305 }, 00:17:30.305 { 00:17:30.305 "name": "BaseBdev4", 00:17:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.305 "is_configured": false, 00:17:30.305 "data_offset": 0, 00:17:30.305 "data_size": 0 00:17:30.305 } 00:17:30.305 ] 00:17:30.305 }' 00:17:30.305 03:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.305 03:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.565 
[2024-11-05 03:28:44.165090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.565 [2024-11-05 03:28:44.165155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.565 [2024-11-05 03:28:44.173147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.565 [2024-11-05 03:28:44.175800] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.565 [2024-11-05 03:28:44.175849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.565 [2024-11-05 03:28:44.175866] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.565 [2024-11-05 03:28:44.175883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.565 [2024-11-05 03:28:44.175894] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.565 [2024-11-05 03:28:44.175908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.565 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.824 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.824 "name": "Existed_Raid", 00:17:30.824 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:30.824 "strip_size_kb": 64, 00:17:30.824 "state": "configuring", 00:17:30.824 "raid_level": "raid5f", 00:17:30.824 "superblock": false, 00:17:30.824 "num_base_bdevs": 4, 00:17:30.824 "num_base_bdevs_discovered": 1, 00:17:30.824 "num_base_bdevs_operational": 4, 00:17:30.824 "base_bdevs_list": [ 00:17:30.824 { 00:17:30.824 "name": "BaseBdev1", 00:17:30.824 "uuid": "03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:30.824 "is_configured": true, 00:17:30.824 "data_offset": 0, 00:17:30.824 "data_size": 65536 00:17:30.824 }, 00:17:30.824 { 00:17:30.824 "name": "BaseBdev2", 00:17:30.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.824 "is_configured": false, 00:17:30.824 "data_offset": 0, 00:17:30.824 "data_size": 0 00:17:30.824 }, 00:17:30.824 { 00:17:30.824 "name": "BaseBdev3", 00:17:30.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.824 "is_configured": false, 00:17:30.824 "data_offset": 0, 00:17:30.824 "data_size": 0 00:17:30.824 }, 00:17:30.824 { 00:17:30.824 "name": "BaseBdev4", 00:17:30.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.824 "is_configured": false, 00:17:30.824 "data_offset": 0, 00:17:30.824 "data_size": 0 00:17:30.824 } 00:17:30.824 ] 00:17:30.824 }' 00:17:30.824 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.824 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.083 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.083 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.083 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.343 [2024-11-05 03:28:44.753682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.343 BaseBdev2 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.343 [ 00:17:31.343 { 00:17:31.343 "name": "BaseBdev2", 00:17:31.343 "aliases": [ 00:17:31.343 "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1" 00:17:31.343 ], 00:17:31.343 "product_name": "Malloc disk", 00:17:31.343 "block_size": 512, 00:17:31.343 "num_blocks": 65536, 00:17:31.343 "uuid": "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1", 00:17:31.343 "assigned_rate_limits": { 00:17:31.343 "rw_ios_per_sec": 0, 00:17:31.343 "rw_mbytes_per_sec": 0, 00:17:31.343 
"r_mbytes_per_sec": 0, 00:17:31.343 "w_mbytes_per_sec": 0 00:17:31.343 }, 00:17:31.343 "claimed": true, 00:17:31.343 "claim_type": "exclusive_write", 00:17:31.343 "zoned": false, 00:17:31.343 "supported_io_types": { 00:17:31.343 "read": true, 00:17:31.343 "write": true, 00:17:31.343 "unmap": true, 00:17:31.343 "flush": true, 00:17:31.343 "reset": true, 00:17:31.343 "nvme_admin": false, 00:17:31.343 "nvme_io": false, 00:17:31.343 "nvme_io_md": false, 00:17:31.343 "write_zeroes": true, 00:17:31.343 "zcopy": true, 00:17:31.343 "get_zone_info": false, 00:17:31.343 "zone_management": false, 00:17:31.343 "zone_append": false, 00:17:31.343 "compare": false, 00:17:31.343 "compare_and_write": false, 00:17:31.343 "abort": true, 00:17:31.343 "seek_hole": false, 00:17:31.343 "seek_data": false, 00:17:31.343 "copy": true, 00:17:31.343 "nvme_iov_md": false 00:17:31.343 }, 00:17:31.343 "memory_domains": [ 00:17:31.343 { 00:17:31.343 "dma_device_id": "system", 00:17:31.343 "dma_device_type": 1 00:17:31.343 }, 00:17:31.343 { 00:17:31.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.343 "dma_device_type": 2 00:17:31.343 } 00:17:31.343 ], 00:17:31.343 "driver_specific": {} 00:17:31.343 } 00:17:31.343 ] 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.343 "name": "Existed_Raid", 00:17:31.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.343 "strip_size_kb": 64, 00:17:31.343 "state": "configuring", 00:17:31.343 "raid_level": "raid5f", 00:17:31.343 "superblock": false, 00:17:31.343 "num_base_bdevs": 4, 00:17:31.343 "num_base_bdevs_discovered": 2, 00:17:31.343 "num_base_bdevs_operational": 4, 00:17:31.343 "base_bdevs_list": [ 00:17:31.343 { 00:17:31.343 "name": "BaseBdev1", 00:17:31.343 "uuid": 
"03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:31.343 "is_configured": true, 00:17:31.343 "data_offset": 0, 00:17:31.343 "data_size": 65536 00:17:31.343 }, 00:17:31.343 { 00:17:31.343 "name": "BaseBdev2", 00:17:31.343 "uuid": "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1", 00:17:31.343 "is_configured": true, 00:17:31.343 "data_offset": 0, 00:17:31.343 "data_size": 65536 00:17:31.343 }, 00:17:31.343 { 00:17:31.343 "name": "BaseBdev3", 00:17:31.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.343 "is_configured": false, 00:17:31.343 "data_offset": 0, 00:17:31.343 "data_size": 0 00:17:31.343 }, 00:17:31.343 { 00:17:31.343 "name": "BaseBdev4", 00:17:31.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.343 "is_configured": false, 00:17:31.343 "data_offset": 0, 00:17:31.343 "data_size": 0 00:17:31.343 } 00:17:31.343 ] 00:17:31.343 }' 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.343 03:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.912 [2024-11-05 03:28:45.379753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.912 BaseBdev3 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:31.912 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.913 [ 00:17:31.913 { 00:17:31.913 "name": "BaseBdev3", 00:17:31.913 "aliases": [ 00:17:31.913 "ed43207a-c3be-4c26-a0d5-a81621106b9a" 00:17:31.913 ], 00:17:31.913 "product_name": "Malloc disk", 00:17:31.913 "block_size": 512, 00:17:31.913 "num_blocks": 65536, 00:17:31.913 "uuid": "ed43207a-c3be-4c26-a0d5-a81621106b9a", 00:17:31.913 "assigned_rate_limits": { 00:17:31.913 "rw_ios_per_sec": 0, 00:17:31.913 "rw_mbytes_per_sec": 0, 00:17:31.913 "r_mbytes_per_sec": 0, 00:17:31.913 "w_mbytes_per_sec": 0 00:17:31.913 }, 00:17:31.913 "claimed": true, 00:17:31.913 "claim_type": "exclusive_write", 00:17:31.913 "zoned": false, 00:17:31.913 "supported_io_types": { 00:17:31.913 "read": true, 00:17:31.913 "write": true, 00:17:31.913 "unmap": true, 00:17:31.913 "flush": true, 00:17:31.913 "reset": true, 00:17:31.913 "nvme_admin": false, 
00:17:31.913 "nvme_io": false, 00:17:31.913 "nvme_io_md": false, 00:17:31.913 "write_zeroes": true, 00:17:31.913 "zcopy": true, 00:17:31.913 "get_zone_info": false, 00:17:31.913 "zone_management": false, 00:17:31.913 "zone_append": false, 00:17:31.913 "compare": false, 00:17:31.913 "compare_and_write": false, 00:17:31.913 "abort": true, 00:17:31.913 "seek_hole": false, 00:17:31.913 "seek_data": false, 00:17:31.913 "copy": true, 00:17:31.913 "nvme_iov_md": false 00:17:31.913 }, 00:17:31.913 "memory_domains": [ 00:17:31.913 { 00:17:31.913 "dma_device_id": "system", 00:17:31.913 "dma_device_type": 1 00:17:31.913 }, 00:17:31.913 { 00:17:31.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.913 "dma_device_type": 2 00:17:31.913 } 00:17:31.913 ], 00:17:31.913 "driver_specific": {} 00:17:31.913 } 00:17:31.913 ] 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.913 "name": "Existed_Raid", 00:17:31.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.913 "strip_size_kb": 64, 00:17:31.913 "state": "configuring", 00:17:31.913 "raid_level": "raid5f", 00:17:31.913 "superblock": false, 00:17:31.913 "num_base_bdevs": 4, 00:17:31.913 "num_base_bdevs_discovered": 3, 00:17:31.913 "num_base_bdevs_operational": 4, 00:17:31.913 "base_bdevs_list": [ 00:17:31.913 { 00:17:31.913 "name": "BaseBdev1", 00:17:31.913 "uuid": "03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:31.913 "is_configured": true, 00:17:31.913 "data_offset": 0, 00:17:31.913 "data_size": 65536 00:17:31.913 }, 00:17:31.913 { 00:17:31.913 "name": "BaseBdev2", 00:17:31.913 "uuid": "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1", 00:17:31.913 "is_configured": true, 00:17:31.913 "data_offset": 0, 00:17:31.913 "data_size": 65536 00:17:31.913 }, 00:17:31.913 { 
00:17:31.913 "name": "BaseBdev3", 00:17:31.913 "uuid": "ed43207a-c3be-4c26-a0d5-a81621106b9a", 00:17:31.913 "is_configured": true, 00:17:31.913 "data_offset": 0, 00:17:31.913 "data_size": 65536 00:17:31.913 }, 00:17:31.913 { 00:17:31.913 "name": "BaseBdev4", 00:17:31.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.913 "is_configured": false, 00:17:31.913 "data_offset": 0, 00:17:31.913 "data_size": 0 00:17:31.913 } 00:17:31.913 ] 00:17:31.913 }' 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.913 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.482 [2024-11-05 03:28:45.975905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.482 [2024-11-05 03:28:45.975999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:32.482 [2024-11-05 03:28:45.976014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:32.482 [2024-11-05 03:28:45.976406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:32.482 [2024-11-05 03:28:45.983396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:32.482 [2024-11-05 03:28:45.983425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:32.482 [2024-11-05 03:28:45.983780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.482 BaseBdev4 00:17:32.482 03:28:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.482 03:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.482 [ 00:17:32.482 { 00:17:32.482 "name": "BaseBdev4", 00:17:32.482 "aliases": [ 00:17:32.482 "8d46d61d-d415-4618-8cae-3dc2e62ab7e4" 00:17:32.482 ], 00:17:32.482 "product_name": "Malloc disk", 00:17:32.482 "block_size": 512, 00:17:32.482 "num_blocks": 65536, 00:17:32.482 "uuid": "8d46d61d-d415-4618-8cae-3dc2e62ab7e4", 00:17:32.482 "assigned_rate_limits": { 00:17:32.482 "rw_ios_per_sec": 0, 00:17:32.482 
"rw_mbytes_per_sec": 0, 00:17:32.482 "r_mbytes_per_sec": 0, 00:17:32.482 "w_mbytes_per_sec": 0 00:17:32.482 }, 00:17:32.482 "claimed": true, 00:17:32.482 "claim_type": "exclusive_write", 00:17:32.482 "zoned": false, 00:17:32.482 "supported_io_types": { 00:17:32.482 "read": true, 00:17:32.482 "write": true, 00:17:32.482 "unmap": true, 00:17:32.482 "flush": true, 00:17:32.482 "reset": true, 00:17:32.482 "nvme_admin": false, 00:17:32.482 "nvme_io": false, 00:17:32.482 "nvme_io_md": false, 00:17:32.482 "write_zeroes": true, 00:17:32.482 "zcopy": true, 00:17:32.482 "get_zone_info": false, 00:17:32.482 "zone_management": false, 00:17:32.482 "zone_append": false, 00:17:32.482 "compare": false, 00:17:32.482 "compare_and_write": false, 00:17:32.482 "abort": true, 00:17:32.482 "seek_hole": false, 00:17:32.482 "seek_data": false, 00:17:32.482 "copy": true, 00:17:32.482 "nvme_iov_md": false 00:17:32.482 }, 00:17:32.482 "memory_domains": [ 00:17:32.482 { 00:17:32.482 "dma_device_id": "system", 00:17:32.482 "dma_device_type": 1 00:17:32.482 }, 00:17:32.482 { 00:17:32.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.482 "dma_device_type": 2 00:17:32.482 } 00:17:32.482 ], 00:17:32.482 "driver_specific": {} 00:17:32.482 } 00:17:32.482 ] 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.482 03:28:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.482 "name": "Existed_Raid", 00:17:32.482 "uuid": "c340c247-fc9a-4afb-a03b-87685a9e935a", 00:17:32.482 "strip_size_kb": 64, 00:17:32.482 "state": "online", 00:17:32.482 "raid_level": "raid5f", 00:17:32.482 "superblock": false, 00:17:32.482 "num_base_bdevs": 4, 00:17:32.482 "num_base_bdevs_discovered": 4, 00:17:32.482 "num_base_bdevs_operational": 4, 00:17:32.482 "base_bdevs_list": [ 00:17:32.482 { 00:17:32.482 "name": 
"BaseBdev1", 00:17:32.482 "uuid": "03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:32.482 "is_configured": true, 00:17:32.482 "data_offset": 0, 00:17:32.482 "data_size": 65536 00:17:32.482 }, 00:17:32.482 { 00:17:32.482 "name": "BaseBdev2", 00:17:32.482 "uuid": "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1", 00:17:32.482 "is_configured": true, 00:17:32.482 "data_offset": 0, 00:17:32.482 "data_size": 65536 00:17:32.482 }, 00:17:32.482 { 00:17:32.482 "name": "BaseBdev3", 00:17:32.482 "uuid": "ed43207a-c3be-4c26-a0d5-a81621106b9a", 00:17:32.482 "is_configured": true, 00:17:32.482 "data_offset": 0, 00:17:32.482 "data_size": 65536 00:17:32.482 }, 00:17:32.482 { 00:17:32.482 "name": "BaseBdev4", 00:17:32.482 "uuid": "8d46d61d-d415-4618-8cae-3dc2e62ab7e4", 00:17:32.482 "is_configured": true, 00:17:32.482 "data_offset": 0, 00:17:32.482 "data_size": 65536 00:17:32.482 } 00:17:32.482 ] 00:17:32.482 }' 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.482 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.051 [2024-11-05 03:28:46.539970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.051 "name": "Existed_Raid", 00:17:33.051 "aliases": [ 00:17:33.051 "c340c247-fc9a-4afb-a03b-87685a9e935a" 00:17:33.051 ], 00:17:33.051 "product_name": "Raid Volume", 00:17:33.051 "block_size": 512, 00:17:33.051 "num_blocks": 196608, 00:17:33.051 "uuid": "c340c247-fc9a-4afb-a03b-87685a9e935a", 00:17:33.051 "assigned_rate_limits": { 00:17:33.051 "rw_ios_per_sec": 0, 00:17:33.051 "rw_mbytes_per_sec": 0, 00:17:33.051 "r_mbytes_per_sec": 0, 00:17:33.051 "w_mbytes_per_sec": 0 00:17:33.051 }, 00:17:33.051 "claimed": false, 00:17:33.051 "zoned": false, 00:17:33.051 "supported_io_types": { 00:17:33.051 "read": true, 00:17:33.051 "write": true, 00:17:33.051 "unmap": false, 00:17:33.051 "flush": false, 00:17:33.051 "reset": true, 00:17:33.051 "nvme_admin": false, 00:17:33.051 "nvme_io": false, 00:17:33.051 "nvme_io_md": false, 00:17:33.051 "write_zeroes": true, 00:17:33.051 "zcopy": false, 00:17:33.051 "get_zone_info": false, 00:17:33.051 "zone_management": false, 00:17:33.051 "zone_append": false, 00:17:33.051 "compare": false, 00:17:33.051 "compare_and_write": false, 00:17:33.051 "abort": false, 00:17:33.051 "seek_hole": false, 00:17:33.051 "seek_data": false, 00:17:33.051 "copy": false, 00:17:33.051 "nvme_iov_md": false 00:17:33.051 }, 00:17:33.051 "driver_specific": { 00:17:33.051 "raid": { 00:17:33.051 "uuid": "c340c247-fc9a-4afb-a03b-87685a9e935a", 00:17:33.051 "strip_size_kb": 64, 
00:17:33.051 "state": "online", 00:17:33.051 "raid_level": "raid5f", 00:17:33.051 "superblock": false, 00:17:33.051 "num_base_bdevs": 4, 00:17:33.051 "num_base_bdevs_discovered": 4, 00:17:33.051 "num_base_bdevs_operational": 4, 00:17:33.051 "base_bdevs_list": [ 00:17:33.051 { 00:17:33.051 "name": "BaseBdev1", 00:17:33.051 "uuid": "03add45f-4273-4f4f-be21-074e2e039a1f", 00:17:33.051 "is_configured": true, 00:17:33.051 "data_offset": 0, 00:17:33.051 "data_size": 65536 00:17:33.051 }, 00:17:33.051 { 00:17:33.051 "name": "BaseBdev2", 00:17:33.051 "uuid": "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1", 00:17:33.051 "is_configured": true, 00:17:33.051 "data_offset": 0, 00:17:33.051 "data_size": 65536 00:17:33.051 }, 00:17:33.051 { 00:17:33.051 "name": "BaseBdev3", 00:17:33.051 "uuid": "ed43207a-c3be-4c26-a0d5-a81621106b9a", 00:17:33.051 "is_configured": true, 00:17:33.051 "data_offset": 0, 00:17:33.051 "data_size": 65536 00:17:33.051 }, 00:17:33.051 { 00:17:33.051 "name": "BaseBdev4", 00:17:33.051 "uuid": "8d46d61d-d415-4618-8cae-3dc2e62ab7e4", 00:17:33.051 "is_configured": true, 00:17:33.051 "data_offset": 0, 00:17:33.051 "data_size": 65536 00:17:33.051 } 00:17:33.051 ] 00:17:33.051 } 00:17:33.051 } 00:17:33.051 }' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:33.051 BaseBdev2 00:17:33.051 BaseBdev3 00:17:33.051 BaseBdev4' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.051 03:28:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.051 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.311 03:28:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.311 [2024-11-05 03:28:46.903893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.571 03:28:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.571 03:28:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.571 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.571 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.571 "name": "Existed_Raid", 00:17:33.571 "uuid": "c340c247-fc9a-4afb-a03b-87685a9e935a", 00:17:33.571 "strip_size_kb": 64, 00:17:33.571 "state": "online", 00:17:33.571 "raid_level": "raid5f", 00:17:33.571 "superblock": false, 00:17:33.571 "num_base_bdevs": 4, 00:17:33.571 "num_base_bdevs_discovered": 3, 00:17:33.571 "num_base_bdevs_operational": 3, 00:17:33.571 "base_bdevs_list": [ 00:17:33.571 { 00:17:33.571 "name": null, 00:17:33.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.571 "is_configured": false, 00:17:33.571 "data_offset": 0, 00:17:33.571 "data_size": 65536 00:17:33.571 }, 00:17:33.571 { 00:17:33.571 "name": "BaseBdev2", 00:17:33.571 "uuid": "63187f3d-4c7a-4c9b-a11c-af643e2d9ed1", 00:17:33.571 "is_configured": true, 00:17:33.571 "data_offset": 0, 00:17:33.571 "data_size": 65536 00:17:33.571 }, 00:17:33.571 { 00:17:33.571 "name": "BaseBdev3", 00:17:33.571 "uuid": "ed43207a-c3be-4c26-a0d5-a81621106b9a", 00:17:33.571 "is_configured": true, 00:17:33.571 "data_offset": 0, 00:17:33.571 "data_size": 65536 00:17:33.571 }, 00:17:33.571 { 00:17:33.571 "name": "BaseBdev4", 00:17:33.571 "uuid": "8d46d61d-d415-4618-8cae-3dc2e62ab7e4", 00:17:33.571 "is_configured": true, 00:17:33.571 "data_offset": 0, 00:17:33.571 "data_size": 65536 00:17:33.571 } 00:17:33.571 ] 00:17:33.571 }' 00:17:33.571 
03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.571 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.139 [2024-11-05 03:28:47.572026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.139 [2024-11-05 03:28:47.572347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.139 [2024-11-05 03:28:47.655855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.139 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.139 [2024-11-05 03:28:47.719840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.398 [2024-11-05 03:28:47.868326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.398 [2024-11-05 03:28:47.868443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:34.398 03:28:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.398 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 BaseBdev2 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 [ 00:17:34.658 { 00:17:34.658 "name": "BaseBdev2", 00:17:34.658 "aliases": [ 00:17:34.658 "a9bd3ef7-90b5-468d-81f3-fdd893570abf" 00:17:34.658 ], 00:17:34.658 "product_name": "Malloc disk", 00:17:34.658 "block_size": 512, 00:17:34.658 "num_blocks": 65536, 00:17:34.658 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:34.658 "assigned_rate_limits": { 00:17:34.658 "rw_ios_per_sec": 0, 00:17:34.658 "rw_mbytes_per_sec": 0, 00:17:34.658 "r_mbytes_per_sec": 0, 00:17:34.658 "w_mbytes_per_sec": 0 00:17:34.658 }, 00:17:34.658 "claimed": false, 00:17:34.658 "zoned": false, 00:17:34.658 "supported_io_types": { 00:17:34.658 "read": true, 00:17:34.658 "write": true, 00:17:34.658 "unmap": true, 00:17:34.658 "flush": true, 00:17:34.658 "reset": true, 00:17:34.658 "nvme_admin": false, 00:17:34.658 "nvme_io": false, 00:17:34.658 "nvme_io_md": false, 00:17:34.658 "write_zeroes": true, 00:17:34.658 "zcopy": true, 00:17:34.658 "get_zone_info": false, 00:17:34.658 "zone_management": false, 00:17:34.658 "zone_append": false, 00:17:34.658 "compare": false, 00:17:34.658 "compare_and_write": false, 00:17:34.658 "abort": true, 00:17:34.658 "seek_hole": false, 00:17:34.658 "seek_data": false, 00:17:34.658 "copy": true, 00:17:34.658 "nvme_iov_md": false 00:17:34.658 }, 00:17:34.658 "memory_domains": [ 00:17:34.658 { 00:17:34.658 "dma_device_id": "system", 00:17:34.658 
"dma_device_type": 1 00:17:34.658 }, 00:17:34.658 { 00:17:34.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.658 "dma_device_type": 2 00:17:34.658 } 00:17:34.658 ], 00:17:34.658 "driver_specific": {} 00:17:34.658 } 00:17:34.658 ] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 BaseBdev3 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:34.658 03:28:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.658 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 [ 00:17:34.658 { 00:17:34.658 "name": "BaseBdev3", 00:17:34.658 "aliases": [ 00:17:34.658 "15721f58-96db-4a71-b48b-9650a0a1b56a" 00:17:34.658 ], 00:17:34.659 "product_name": "Malloc disk", 00:17:34.659 "block_size": 512, 00:17:34.659 "num_blocks": 65536, 00:17:34.659 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:34.659 "assigned_rate_limits": { 00:17:34.659 "rw_ios_per_sec": 0, 00:17:34.659 "rw_mbytes_per_sec": 0, 00:17:34.659 "r_mbytes_per_sec": 0, 00:17:34.659 "w_mbytes_per_sec": 0 00:17:34.659 }, 00:17:34.659 "claimed": false, 00:17:34.659 "zoned": false, 00:17:34.659 "supported_io_types": { 00:17:34.659 "read": true, 00:17:34.659 "write": true, 00:17:34.659 "unmap": true, 00:17:34.659 "flush": true, 00:17:34.659 "reset": true, 00:17:34.659 "nvme_admin": false, 00:17:34.659 "nvme_io": false, 00:17:34.659 "nvme_io_md": false, 00:17:34.659 "write_zeroes": true, 00:17:34.659 "zcopy": true, 00:17:34.659 "get_zone_info": false, 00:17:34.659 "zone_management": false, 00:17:34.659 "zone_append": false, 00:17:34.659 "compare": false, 00:17:34.659 "compare_and_write": false, 00:17:34.659 "abort": true, 00:17:34.659 "seek_hole": false, 00:17:34.659 "seek_data": false, 00:17:34.659 "copy": true, 00:17:34.659 "nvme_iov_md": false 00:17:34.659 }, 00:17:34.659 "memory_domains": [ 00:17:34.659 { 00:17:34.659 
"dma_device_id": "system", 00:17:34.659 "dma_device_type": 1 00:17:34.659 }, 00:17:34.659 { 00:17:34.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.659 "dma_device_type": 2 00:17:34.659 } 00:17:34.659 ], 00:17:34.659 "driver_specific": {} 00:17:34.659 } 00:17:34.659 ] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.659 BaseBdev4 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.659 [ 00:17:34.659 { 00:17:34.659 "name": "BaseBdev4", 00:17:34.659 "aliases": [ 00:17:34.659 "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733" 00:17:34.659 ], 00:17:34.659 "product_name": "Malloc disk", 00:17:34.659 "block_size": 512, 00:17:34.659 "num_blocks": 65536, 00:17:34.659 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:34.659 "assigned_rate_limits": { 00:17:34.659 "rw_ios_per_sec": 0, 00:17:34.659 "rw_mbytes_per_sec": 0, 00:17:34.659 "r_mbytes_per_sec": 0, 00:17:34.659 "w_mbytes_per_sec": 0 00:17:34.659 }, 00:17:34.659 "claimed": false, 00:17:34.659 "zoned": false, 00:17:34.659 "supported_io_types": { 00:17:34.659 "read": true, 00:17:34.659 "write": true, 00:17:34.659 "unmap": true, 00:17:34.659 "flush": true, 00:17:34.659 "reset": true, 00:17:34.659 "nvme_admin": false, 00:17:34.659 "nvme_io": false, 00:17:34.659 "nvme_io_md": false, 00:17:34.659 "write_zeroes": true, 00:17:34.659 "zcopy": true, 00:17:34.659 "get_zone_info": false, 00:17:34.659 "zone_management": false, 00:17:34.659 "zone_append": false, 00:17:34.659 "compare": false, 00:17:34.659 "compare_and_write": false, 00:17:34.659 "abort": true, 00:17:34.659 "seek_hole": false, 00:17:34.659 "seek_data": false, 00:17:34.659 "copy": true, 00:17:34.659 "nvme_iov_md": false 00:17:34.659 }, 00:17:34.659 "memory_domains": [ 
00:17:34.659 { 00:17:34.659 "dma_device_id": "system", 00:17:34.659 "dma_device_type": 1 00:17:34.659 }, 00:17:34.659 { 00:17:34.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.659 "dma_device_type": 2 00:17:34.659 } 00:17:34.659 ], 00:17:34.659 "driver_specific": {} 00:17:34.659 } 00:17:34.659 ] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.659 [2024-11-05 03:28:48.222532] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.659 [2024-11-05 03:28:48.222765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.659 [2024-11-05 03:28:48.222906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.659 [2024-11-05 03:28:48.225378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.659 [2024-11-05 03:28:48.225582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.659 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.659 "name": "Existed_Raid", 00:17:34.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.659 "strip_size_kb": 64, 00:17:34.659 "state": "configuring", 00:17:34.659 "raid_level": "raid5f", 00:17:34.659 
"superblock": false, 00:17:34.659 "num_base_bdevs": 4, 00:17:34.659 "num_base_bdevs_discovered": 3, 00:17:34.659 "num_base_bdevs_operational": 4, 00:17:34.659 "base_bdevs_list": [ 00:17:34.659 { 00:17:34.659 "name": "BaseBdev1", 00:17:34.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.659 "is_configured": false, 00:17:34.659 "data_offset": 0, 00:17:34.659 "data_size": 0 00:17:34.659 }, 00:17:34.659 { 00:17:34.659 "name": "BaseBdev2", 00:17:34.659 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:34.659 "is_configured": true, 00:17:34.659 "data_offset": 0, 00:17:34.659 "data_size": 65536 00:17:34.659 }, 00:17:34.659 { 00:17:34.659 "name": "BaseBdev3", 00:17:34.659 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:34.659 "is_configured": true, 00:17:34.659 "data_offset": 0, 00:17:34.659 "data_size": 65536 00:17:34.659 }, 00:17:34.659 { 00:17:34.659 "name": "BaseBdev4", 00:17:34.659 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:34.659 "is_configured": true, 00:17:34.659 "data_offset": 0, 00:17:34.659 "data_size": 65536 00:17:34.659 } 00:17:34.659 ] 00:17:34.660 }' 00:17:34.660 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.660 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.228 [2024-11-05 03:28:48.770719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.228 "name": "Existed_Raid", 00:17:35.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.228 "strip_size_kb": 64, 00:17:35.228 "state": "configuring", 00:17:35.228 "raid_level": "raid5f", 00:17:35.228 "superblock": false, 
00:17:35.228 "num_base_bdevs": 4, 00:17:35.228 "num_base_bdevs_discovered": 2, 00:17:35.228 "num_base_bdevs_operational": 4, 00:17:35.228 "base_bdevs_list": [ 00:17:35.228 { 00:17:35.228 "name": "BaseBdev1", 00:17:35.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.228 "is_configured": false, 00:17:35.228 "data_offset": 0, 00:17:35.228 "data_size": 0 00:17:35.228 }, 00:17:35.228 { 00:17:35.228 "name": null, 00:17:35.228 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:35.228 "is_configured": false, 00:17:35.228 "data_offset": 0, 00:17:35.228 "data_size": 65536 00:17:35.228 }, 00:17:35.228 { 00:17:35.228 "name": "BaseBdev3", 00:17:35.228 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:35.228 "is_configured": true, 00:17:35.228 "data_offset": 0, 00:17:35.228 "data_size": 65536 00:17:35.228 }, 00:17:35.228 { 00:17:35.228 "name": "BaseBdev4", 00:17:35.228 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:35.228 "is_configured": true, 00:17:35.228 "data_offset": 0, 00:17:35.228 "data_size": 65536 00:17:35.228 } 00:17:35.228 ] 00:17:35.228 }' 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.228 03:28:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:35.796 
03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 [2024-11-05 03:28:49.412885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.796 BaseBdev1 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.796 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.797 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.797 
03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.115 [ 00:17:36.115 { 00:17:36.115 "name": "BaseBdev1", 00:17:36.115 "aliases": [ 00:17:36.115 "a54dad2c-55c1-4852-b138-c89a92df6ec6" 00:17:36.115 ], 00:17:36.115 "product_name": "Malloc disk", 00:17:36.115 "block_size": 512, 00:17:36.115 "num_blocks": 65536, 00:17:36.115 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:36.115 "assigned_rate_limits": { 00:17:36.115 "rw_ios_per_sec": 0, 00:17:36.115 "rw_mbytes_per_sec": 0, 00:17:36.115 "r_mbytes_per_sec": 0, 00:17:36.115 "w_mbytes_per_sec": 0 00:17:36.115 }, 00:17:36.115 "claimed": true, 00:17:36.115 "claim_type": "exclusive_write", 00:17:36.115 "zoned": false, 00:17:36.115 "supported_io_types": { 00:17:36.115 "read": true, 00:17:36.115 "write": true, 00:17:36.115 "unmap": true, 00:17:36.115 "flush": true, 00:17:36.115 "reset": true, 00:17:36.115 "nvme_admin": false, 00:17:36.115 "nvme_io": false, 00:17:36.115 "nvme_io_md": false, 00:17:36.115 "write_zeroes": true, 00:17:36.115 "zcopy": true, 00:17:36.115 "get_zone_info": false, 00:17:36.115 "zone_management": false, 00:17:36.115 "zone_append": false, 00:17:36.115 "compare": false, 00:17:36.115 "compare_and_write": false, 00:17:36.115 "abort": true, 00:17:36.115 "seek_hole": false, 00:17:36.115 "seek_data": false, 00:17:36.115 "copy": true, 00:17:36.115 "nvme_iov_md": false 00:17:36.115 }, 00:17:36.115 "memory_domains": [ 00:17:36.115 { 00:17:36.115 "dma_device_id": "system", 00:17:36.115 "dma_device_type": 1 00:17:36.115 }, 00:17:36.115 { 00:17:36.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.115 "dma_device_type": 2 00:17:36.115 } 00:17:36.115 ], 00:17:36.115 "driver_specific": {} 00:17:36.115 } 00:17:36.115 ] 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:36.115 03:28:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.115 "name": "Existed_Raid", 00:17:36.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.115 "strip_size_kb": 64, 00:17:36.115 "state": 
"configuring", 00:17:36.115 "raid_level": "raid5f", 00:17:36.115 "superblock": false, 00:17:36.115 "num_base_bdevs": 4, 00:17:36.115 "num_base_bdevs_discovered": 3, 00:17:36.115 "num_base_bdevs_operational": 4, 00:17:36.115 "base_bdevs_list": [ 00:17:36.115 { 00:17:36.115 "name": "BaseBdev1", 00:17:36.115 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:36.115 "is_configured": true, 00:17:36.115 "data_offset": 0, 00:17:36.115 "data_size": 65536 00:17:36.115 }, 00:17:36.115 { 00:17:36.115 "name": null, 00:17:36.115 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:36.115 "is_configured": false, 00:17:36.115 "data_offset": 0, 00:17:36.115 "data_size": 65536 00:17:36.115 }, 00:17:36.115 { 00:17:36.115 "name": "BaseBdev3", 00:17:36.115 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:36.115 "is_configured": true, 00:17:36.115 "data_offset": 0, 00:17:36.115 "data_size": 65536 00:17:36.115 }, 00:17:36.115 { 00:17:36.115 "name": "BaseBdev4", 00:17:36.115 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:36.115 "is_configured": true, 00:17:36.115 "data_offset": 0, 00:17:36.115 "data_size": 65536 00:17:36.115 } 00:17:36.115 ] 00:17:36.115 }' 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.115 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.374 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.374 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.374 03:28:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.374 03:28:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.633 03:28:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.633 [2024-11-05 03:28:50.053120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.633 03:28:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.633 "name": "Existed_Raid", 00:17:36.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.633 "strip_size_kb": 64, 00:17:36.633 "state": "configuring", 00:17:36.633 "raid_level": "raid5f", 00:17:36.633 "superblock": false, 00:17:36.633 "num_base_bdevs": 4, 00:17:36.633 "num_base_bdevs_discovered": 2, 00:17:36.633 "num_base_bdevs_operational": 4, 00:17:36.633 "base_bdevs_list": [ 00:17:36.633 { 00:17:36.633 "name": "BaseBdev1", 00:17:36.633 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:36.633 "is_configured": true, 00:17:36.633 "data_offset": 0, 00:17:36.633 "data_size": 65536 00:17:36.633 }, 00:17:36.633 { 00:17:36.633 "name": null, 00:17:36.633 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:36.633 "is_configured": false, 00:17:36.633 "data_offset": 0, 00:17:36.633 "data_size": 65536 00:17:36.633 }, 00:17:36.633 { 00:17:36.633 "name": null, 00:17:36.633 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:36.633 "is_configured": false, 00:17:36.633 "data_offset": 0, 00:17:36.633 "data_size": 65536 00:17:36.633 }, 00:17:36.633 { 00:17:36.633 "name": "BaseBdev4", 00:17:36.633 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:36.633 "is_configured": true, 00:17:36.633 "data_offset": 0, 00:17:36.633 "data_size": 65536 00:17:36.633 } 00:17:36.633 ] 00:17:36.633 }' 00:17:36.633 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.633 03:28:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.200 [2024-11-05 03:28:50.653307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.200 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.201 
03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.201 "name": "Existed_Raid", 00:17:37.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.201 "strip_size_kb": 64, 00:17:37.201 "state": "configuring", 00:17:37.201 "raid_level": "raid5f", 00:17:37.201 "superblock": false, 00:17:37.201 "num_base_bdevs": 4, 00:17:37.201 "num_base_bdevs_discovered": 3, 00:17:37.201 "num_base_bdevs_operational": 4, 00:17:37.201 "base_bdevs_list": [ 00:17:37.201 { 00:17:37.201 "name": "BaseBdev1", 00:17:37.201 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:37.201 "is_configured": true, 00:17:37.201 "data_offset": 0, 00:17:37.201 "data_size": 65536 00:17:37.201 }, 00:17:37.201 { 00:17:37.201 "name": null, 00:17:37.201 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:37.201 "is_configured": 
false, 00:17:37.201 "data_offset": 0, 00:17:37.201 "data_size": 65536 00:17:37.201 }, 00:17:37.201 { 00:17:37.201 "name": "BaseBdev3", 00:17:37.201 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:37.201 "is_configured": true, 00:17:37.201 "data_offset": 0, 00:17:37.201 "data_size": 65536 00:17:37.201 }, 00:17:37.201 { 00:17:37.201 "name": "BaseBdev4", 00:17:37.201 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:37.201 "is_configured": true, 00:17:37.201 "data_offset": 0, 00:17:37.201 "data_size": 65536 00:17:37.201 } 00:17:37.201 ] 00:17:37.201 }' 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.201 03:28:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.768 [2024-11-05 03:28:51.265507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.768 "name": "Existed_Raid", 00:17:37.768 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:37.768 "strip_size_kb": 64, 00:17:37.768 "state": "configuring", 00:17:37.768 "raid_level": "raid5f", 00:17:37.768 "superblock": false, 00:17:37.768 "num_base_bdevs": 4, 00:17:37.768 "num_base_bdevs_discovered": 2, 00:17:37.768 "num_base_bdevs_operational": 4, 00:17:37.768 "base_bdevs_list": [ 00:17:37.768 { 00:17:37.768 "name": null, 00:17:37.768 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:37.768 "is_configured": false, 00:17:37.768 "data_offset": 0, 00:17:37.768 "data_size": 65536 00:17:37.768 }, 00:17:37.768 { 00:17:37.768 "name": null, 00:17:37.768 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:37.768 "is_configured": false, 00:17:37.768 "data_offset": 0, 00:17:37.768 "data_size": 65536 00:17:37.768 }, 00:17:37.768 { 00:17:37.768 "name": "BaseBdev3", 00:17:37.768 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:37.768 "is_configured": true, 00:17:37.768 "data_offset": 0, 00:17:37.768 "data_size": 65536 00:17:37.768 }, 00:17:37.768 { 00:17:37.768 "name": "BaseBdev4", 00:17:37.768 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:37.768 "is_configured": true, 00:17:37.768 "data_offset": 0, 00:17:37.768 "data_size": 65536 00:17:37.768 } 00:17:37.768 ] 00:17:37.768 }' 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.768 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.335 [2024-11-05 03:28:51.938567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.335 03:28:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.593 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.593 "name": "Existed_Raid", 00:17:38.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.593 "strip_size_kb": 64, 00:17:38.593 "state": "configuring", 00:17:38.593 "raid_level": "raid5f", 00:17:38.593 "superblock": false, 00:17:38.593 "num_base_bdevs": 4, 00:17:38.593 "num_base_bdevs_discovered": 3, 00:17:38.593 "num_base_bdevs_operational": 4, 00:17:38.593 "base_bdevs_list": [ 00:17:38.593 { 00:17:38.593 "name": null, 00:17:38.593 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:38.593 "is_configured": false, 00:17:38.594 "data_offset": 0, 00:17:38.594 "data_size": 65536 00:17:38.594 }, 00:17:38.594 { 00:17:38.594 "name": "BaseBdev2", 00:17:38.594 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:38.594 "is_configured": true, 00:17:38.594 "data_offset": 0, 00:17:38.594 "data_size": 65536 00:17:38.594 }, 00:17:38.594 { 00:17:38.594 "name": "BaseBdev3", 00:17:38.594 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:38.594 "is_configured": true, 00:17:38.594 "data_offset": 0, 00:17:38.594 "data_size": 65536 00:17:38.594 }, 00:17:38.594 { 00:17:38.594 "name": "BaseBdev4", 00:17:38.594 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:38.594 "is_configured": true, 00:17:38.594 "data_offset": 0, 00:17:38.594 "data_size": 65536 00:17:38.594 } 00:17:38.594 ] 00:17:38.594 }' 00:17:38.594 03:28:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.594 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.852 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.852 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.853 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:38.853 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a54dad2c-55c1-4852-b138-c89a92df6ec6 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.112 [2024-11-05 03:28:52.622780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:39.112 [2024-11-05 
03:28:52.623047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:39.112 [2024-11-05 03:28:52.623069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:39.112 [2024-11-05 03:28:52.623445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:39.112 [2024-11-05 03:28:52.629337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:39.112 [2024-11-05 03:28:52.629513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:39.112 [2024-11-05 03:28:52.629964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.112 NewBaseBdev 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.112 [ 00:17:39.112 { 00:17:39.112 "name": "NewBaseBdev", 00:17:39.112 "aliases": [ 00:17:39.112 "a54dad2c-55c1-4852-b138-c89a92df6ec6" 00:17:39.112 ], 00:17:39.112 "product_name": "Malloc disk", 00:17:39.112 "block_size": 512, 00:17:39.112 "num_blocks": 65536, 00:17:39.112 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:39.112 "assigned_rate_limits": { 00:17:39.112 "rw_ios_per_sec": 0, 00:17:39.112 "rw_mbytes_per_sec": 0, 00:17:39.112 "r_mbytes_per_sec": 0, 00:17:39.112 "w_mbytes_per_sec": 0 00:17:39.112 }, 00:17:39.112 "claimed": true, 00:17:39.112 "claim_type": "exclusive_write", 00:17:39.112 "zoned": false, 00:17:39.112 "supported_io_types": { 00:17:39.112 "read": true, 00:17:39.112 "write": true, 00:17:39.112 "unmap": true, 00:17:39.112 "flush": true, 00:17:39.112 "reset": true, 00:17:39.112 "nvme_admin": false, 00:17:39.112 "nvme_io": false, 00:17:39.112 "nvme_io_md": false, 00:17:39.112 "write_zeroes": true, 00:17:39.112 "zcopy": true, 00:17:39.112 "get_zone_info": false, 00:17:39.112 "zone_management": false, 00:17:39.112 "zone_append": false, 00:17:39.112 "compare": false, 00:17:39.112 "compare_and_write": false, 00:17:39.112 "abort": true, 00:17:39.112 "seek_hole": false, 00:17:39.112 "seek_data": false, 00:17:39.112 "copy": true, 00:17:39.112 "nvme_iov_md": false 00:17:39.112 }, 00:17:39.112 "memory_domains": [ 00:17:39.112 { 00:17:39.112 "dma_device_id": "system", 00:17:39.112 "dma_device_type": 1 00:17:39.112 }, 00:17:39.112 { 00:17:39.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.112 "dma_device_type": 2 00:17:39.112 } 
00:17:39.112 ], 00:17:39.112 "driver_specific": {} 00:17:39.112 } 00:17:39.112 ] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.112 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.113 "name": "Existed_Raid", 00:17:39.113 "uuid": "06781a2e-cf4d-4a8a-a9a8-5e8ac9e82810", 00:17:39.113 "strip_size_kb": 64, 00:17:39.113 "state": "online", 00:17:39.113 "raid_level": "raid5f", 00:17:39.113 "superblock": false, 00:17:39.113 "num_base_bdevs": 4, 00:17:39.113 "num_base_bdevs_discovered": 4, 00:17:39.113 "num_base_bdevs_operational": 4, 00:17:39.113 "base_bdevs_list": [ 00:17:39.113 { 00:17:39.113 "name": "NewBaseBdev", 00:17:39.113 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:39.113 "is_configured": true, 00:17:39.113 "data_offset": 0, 00:17:39.113 "data_size": 65536 00:17:39.113 }, 00:17:39.113 { 00:17:39.113 "name": "BaseBdev2", 00:17:39.113 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:39.113 "is_configured": true, 00:17:39.113 "data_offset": 0, 00:17:39.113 "data_size": 65536 00:17:39.113 }, 00:17:39.113 { 00:17:39.113 "name": "BaseBdev3", 00:17:39.113 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:39.113 "is_configured": true, 00:17:39.113 "data_offset": 0, 00:17:39.113 "data_size": 65536 00:17:39.113 }, 00:17:39.113 { 00:17:39.113 "name": "BaseBdev4", 00:17:39.113 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:39.113 "is_configured": true, 00:17:39.113 "data_offset": 0, 00:17:39.113 "data_size": 65536 00:17:39.113 } 00:17:39.113 ] 00:17:39.113 }' 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.113 03:28:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.680 [2024-11-05 03:28:53.222024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.680 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.680 "name": "Existed_Raid", 00:17:39.680 "aliases": [ 00:17:39.680 "06781a2e-cf4d-4a8a-a9a8-5e8ac9e82810" 00:17:39.680 ], 00:17:39.680 "product_name": "Raid Volume", 00:17:39.680 "block_size": 512, 00:17:39.680 "num_blocks": 196608, 00:17:39.680 "uuid": "06781a2e-cf4d-4a8a-a9a8-5e8ac9e82810", 00:17:39.680 "assigned_rate_limits": { 00:17:39.680 "rw_ios_per_sec": 0, 00:17:39.680 "rw_mbytes_per_sec": 0, 00:17:39.680 "r_mbytes_per_sec": 0, 00:17:39.680 "w_mbytes_per_sec": 0 00:17:39.680 }, 00:17:39.680 "claimed": false, 00:17:39.680 "zoned": false, 00:17:39.680 "supported_io_types": { 00:17:39.680 "read": true, 00:17:39.680 "write": true, 00:17:39.680 "unmap": false, 00:17:39.680 "flush": false, 00:17:39.680 "reset": true, 00:17:39.680 "nvme_admin": false, 00:17:39.680 "nvme_io": false, 00:17:39.680 "nvme_io_md": 
false, 00:17:39.680 "write_zeroes": true, 00:17:39.680 "zcopy": false, 00:17:39.680 "get_zone_info": false, 00:17:39.680 "zone_management": false, 00:17:39.680 "zone_append": false, 00:17:39.680 "compare": false, 00:17:39.680 "compare_and_write": false, 00:17:39.680 "abort": false, 00:17:39.680 "seek_hole": false, 00:17:39.680 "seek_data": false, 00:17:39.680 "copy": false, 00:17:39.680 "nvme_iov_md": false 00:17:39.680 }, 00:17:39.680 "driver_specific": { 00:17:39.680 "raid": { 00:17:39.680 "uuid": "06781a2e-cf4d-4a8a-a9a8-5e8ac9e82810", 00:17:39.681 "strip_size_kb": 64, 00:17:39.681 "state": "online", 00:17:39.681 "raid_level": "raid5f", 00:17:39.681 "superblock": false, 00:17:39.681 "num_base_bdevs": 4, 00:17:39.681 "num_base_bdevs_discovered": 4, 00:17:39.681 "num_base_bdevs_operational": 4, 00:17:39.681 "base_bdevs_list": [ 00:17:39.681 { 00:17:39.681 "name": "NewBaseBdev", 00:17:39.681 "uuid": "a54dad2c-55c1-4852-b138-c89a92df6ec6", 00:17:39.681 "is_configured": true, 00:17:39.681 "data_offset": 0, 00:17:39.681 "data_size": 65536 00:17:39.681 }, 00:17:39.681 { 00:17:39.681 "name": "BaseBdev2", 00:17:39.681 "uuid": "a9bd3ef7-90b5-468d-81f3-fdd893570abf", 00:17:39.681 "is_configured": true, 00:17:39.681 "data_offset": 0, 00:17:39.681 "data_size": 65536 00:17:39.681 }, 00:17:39.681 { 00:17:39.681 "name": "BaseBdev3", 00:17:39.681 "uuid": "15721f58-96db-4a71-b48b-9650a0a1b56a", 00:17:39.681 "is_configured": true, 00:17:39.681 "data_offset": 0, 00:17:39.681 "data_size": 65536 00:17:39.681 }, 00:17:39.681 { 00:17:39.681 "name": "BaseBdev4", 00:17:39.681 "uuid": "54d4e5b9-93bc-4ffa-a2a5-88ad1ddde733", 00:17:39.681 "is_configured": true, 00:17:39.681 "data_offset": 0, 00:17:39.681 "data_size": 65536 00:17:39.681 } 00:17:39.681 ] 00:17:39.681 } 00:17:39.681 } 00:17:39.681 }' 00:17:39.681 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.940 03:28:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:39.940 BaseBdev2 00:17:39.940 BaseBdev3 00:17:39.940 BaseBdev4' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.940 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.940 03:28:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.199 [2024-11-05 03:28:53.621828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.199 [2024-11-05 03:28:53.621867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.199 [2024-11-05 03:28:53.621963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.199 [2024-11-05 03:28:53.622340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.199 [2024-11-05 03:28:53.622360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82910 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 82910 ']' 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 82910 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82910 00:17:40.199 killing process with pid 82910 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82910' 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 82910 00:17:40.199 [2024-11-05 03:28:53.662525] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.199 03:28:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 82910 00:17:40.458 [2024-11-05 03:28:54.027150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:41.832 00:17:41.832 real 0m13.330s 00:17:41.832 user 0m22.040s 00:17:41.832 sys 0m1.977s 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:41.832 ************************************ 00:17:41.832 END TEST raid5f_state_function_test 00:17:41.832 ************************************ 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.832 03:28:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:41.832 03:28:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:41.832 03:28:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:41.832 03:28:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.832 ************************************ 00:17:41.832 START TEST 
raid5f_state_function_test_sb 00:17:41.832 ************************************ 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:41.832 
03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83600 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83600' 00:17:41.832 Process raid pid: 83600 00:17:41.832 03:28:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83600 00:17:41.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.832 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83600 ']' 00:17:41.833 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.833 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:41.833 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.833 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.833 03:28:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.833 [2024-11-05 03:28:55.300162] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:17:41.833 [2024-11-05 03:28:55.300381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.092 [2024-11-05 03:28:55.498986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.092 [2024-11-05 03:28:55.657088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.351 [2024-11-05 03:28:55.911858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.351 [2024-11-05 03:28:55.911916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.918 [2024-11-05 03:28:56.340118] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.918 [2024-11-05 03:28:56.340195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.918 [2024-11-05 03:28:56.340214] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.918 [2024-11-05 03:28:56.340230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.918 [2024-11-05 03:28:56.340240] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:42.918 [2024-11-05 03:28:56.340254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.918 [2024-11-05 03:28:56.340264] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.918 [2024-11-05 03:28:56.340278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.918 "name": "Existed_Raid", 00:17:42.918 "uuid": "a4550e87-23d5-445b-96e1-74790a35835a", 00:17:42.918 "strip_size_kb": 64, 00:17:42.918 "state": "configuring", 00:17:42.918 "raid_level": "raid5f", 00:17:42.918 "superblock": true, 00:17:42.918 "num_base_bdevs": 4, 00:17:42.918 "num_base_bdevs_discovered": 0, 00:17:42.918 "num_base_bdevs_operational": 4, 00:17:42.918 "base_bdevs_list": [ 00:17:42.918 { 00:17:42.918 "name": "BaseBdev1", 00:17:42.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.918 "is_configured": false, 00:17:42.918 "data_offset": 0, 00:17:42.918 "data_size": 0 00:17:42.918 }, 00:17:42.918 { 00:17:42.918 "name": "BaseBdev2", 00:17:42.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.918 "is_configured": false, 00:17:42.918 "data_offset": 0, 00:17:42.918 "data_size": 0 00:17:42.918 }, 00:17:42.918 { 00:17:42.918 "name": "BaseBdev3", 00:17:42.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.918 "is_configured": false, 00:17:42.918 "data_offset": 0, 00:17:42.918 "data_size": 0 00:17:42.918 }, 00:17:42.918 { 00:17:42.918 "name": "BaseBdev4", 00:17:42.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.918 "is_configured": false, 00:17:42.918 "data_offset": 0, 00:17:42.918 "data_size": 0 00:17:42.918 } 00:17:42.918 ] 00:17:42.918 }' 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.918 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
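The `verify_raid_bdev_state` helper above fetches the array info with `rpc_cmd bdev_raid_get_bdevs all`, selects the entry by name with `jq`, and compares its fields against the expected values. A minimal Python sketch of that field check (not the shell helper itself); the JSON literal is trimmed from the `Existed_Raid` info captured in this log, before any base bdev exists:

```python
import json

# Existed_Raid info as reported by `bdev_raid_get_bdevs all` in the log above,
# before any base bdev has been created (all four slots still unconfigured).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Sketch of verify_raid_bdev_state's comparisons against the RPC output."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return True

# Mirrors the `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` call.
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4))  # -> True
```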
00:17:43.496 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.496 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.496 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.496 [2024-11-05 03:28:56.856171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.496 [2024-11-05 03:28:56.856213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:43.496 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.497 [2024-11-05 03:28:56.864160] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.497 [2024-11-05 03:28:56.864224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.497 [2024-11-05 03:28:56.864255] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.497 [2024-11-05 03:28:56.864287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.497 [2024-11-05 03:28:56.864297] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:43.497 [2024-11-05 03:28:56.864311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.497 [2024-11-05 03:28:56.864321] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:43.497 [2024-11-05 03:28:56.864335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.497 [2024-11-05 03:28:56.908651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.497 BaseBdev1 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.497 [ 00:17:43.497 { 00:17:43.497 "name": "BaseBdev1", 00:17:43.497 "aliases": [ 00:17:43.497 "f18ced38-634d-4295-9a3b-927fa31be1de" 00:17:43.497 ], 00:17:43.497 "product_name": "Malloc disk", 00:17:43.497 "block_size": 512, 00:17:43.497 "num_blocks": 65536, 00:17:43.497 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:43.497 "assigned_rate_limits": { 00:17:43.497 "rw_ios_per_sec": 0, 00:17:43.497 "rw_mbytes_per_sec": 0, 00:17:43.497 "r_mbytes_per_sec": 0, 00:17:43.497 "w_mbytes_per_sec": 0 00:17:43.497 }, 00:17:43.497 "claimed": true, 00:17:43.497 "claim_type": "exclusive_write", 00:17:43.497 "zoned": false, 00:17:43.497 "supported_io_types": { 00:17:43.497 "read": true, 00:17:43.497 "write": true, 00:17:43.497 "unmap": true, 00:17:43.497 "flush": true, 00:17:43.497 "reset": true, 00:17:43.497 "nvme_admin": false, 00:17:43.497 "nvme_io": false, 00:17:43.497 "nvme_io_md": false, 00:17:43.497 "write_zeroes": true, 00:17:43.497 "zcopy": true, 00:17:43.497 "get_zone_info": false, 00:17:43.497 "zone_management": false, 00:17:43.497 "zone_append": false, 00:17:43.497 "compare": false, 00:17:43.497 "compare_and_write": false, 00:17:43.497 "abort": true, 00:17:43.497 "seek_hole": false, 00:17:43.497 "seek_data": false, 00:17:43.497 "copy": true, 00:17:43.497 "nvme_iov_md": false 00:17:43.497 }, 00:17:43.497 "memory_domains": [ 00:17:43.497 { 00:17:43.497 "dma_device_id": "system", 00:17:43.497 "dma_device_type": 1 00:17:43.497 }, 00:17:43.497 { 00:17:43.497 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:43.497 "dma_device_type": 2 00:17:43.497 } 00:17:43.497 ], 00:17:43.497 "driver_specific": {} 00:17:43.497 } 00:17:43.497 ] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.497 03:28:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.497 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.497 "name": "Existed_Raid", 00:17:43.497 "uuid": "3ae5d935-1d16-49c2-bd33-8a88ebc8654b", 00:17:43.497 "strip_size_kb": 64, 00:17:43.497 "state": "configuring", 00:17:43.497 "raid_level": "raid5f", 00:17:43.497 "superblock": true, 00:17:43.497 "num_base_bdevs": 4, 00:17:43.497 "num_base_bdevs_discovered": 1, 00:17:43.497 "num_base_bdevs_operational": 4, 00:17:43.497 "base_bdevs_list": [ 00:17:43.497 { 00:17:43.497 "name": "BaseBdev1", 00:17:43.497 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:43.497 "is_configured": true, 00:17:43.497 "data_offset": 2048, 00:17:43.497 "data_size": 63488 00:17:43.497 }, 00:17:43.497 { 00:17:43.497 "name": "BaseBdev2", 00:17:43.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.497 "is_configured": false, 00:17:43.497 "data_offset": 0, 00:17:43.497 "data_size": 0 00:17:43.497 }, 00:17:43.497 { 00:17:43.497 "name": "BaseBdev3", 00:17:43.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.497 "is_configured": false, 00:17:43.497 "data_offset": 0, 00:17:43.497 "data_size": 0 00:17:43.497 }, 00:17:43.497 { 00:17:43.497 "name": "BaseBdev4", 00:17:43.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.497 "is_configured": false, 00:17:43.497 "data_offset": 0, 00:17:43.497 "data_size": 0 00:17:43.497 } 00:17:43.497 ] 00:17:43.497 }' 00:17:43.497 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.497 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.065 03:28:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.065 [2024-11-05 03:28:57.520954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.065 [2024-11-05 03:28:57.521187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.065 [2024-11-05 03:28:57.529023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.065 [2024-11-05 03:28:57.531574] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.065 [2024-11-05 03:28:57.531817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.065 [2024-11-05 03:28:57.531844] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.065 [2024-11-05 03:28:57.531864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.065 [2024-11-05 03:28:57.531875] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.065 [2024-11-05 03:28:57.531889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.065 03:28:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.065 "name": "Existed_Raid", 00:17:44.065 "uuid": "d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:44.065 "strip_size_kb": 64, 00:17:44.065 "state": "configuring", 00:17:44.065 "raid_level": "raid5f", 00:17:44.065 "superblock": true, 00:17:44.065 "num_base_bdevs": 4, 00:17:44.065 "num_base_bdevs_discovered": 1, 00:17:44.065 "num_base_bdevs_operational": 4, 00:17:44.065 "base_bdevs_list": [ 00:17:44.065 { 00:17:44.065 "name": "BaseBdev1", 00:17:44.065 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:44.065 "is_configured": true, 00:17:44.065 "data_offset": 2048, 00:17:44.065 "data_size": 63488 00:17:44.065 }, 00:17:44.065 { 00:17:44.065 "name": "BaseBdev2", 00:17:44.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.065 "is_configured": false, 00:17:44.065 "data_offset": 0, 00:17:44.065 "data_size": 0 00:17:44.065 }, 00:17:44.065 { 00:17:44.065 "name": "BaseBdev3", 00:17:44.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.065 "is_configured": false, 00:17:44.065 "data_offset": 0, 00:17:44.065 "data_size": 0 00:17:44.065 }, 00:17:44.065 { 00:17:44.065 "name": "BaseBdev4", 00:17:44.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.065 "is_configured": false, 00:17:44.065 "data_offset": 0, 00:17:44.065 "data_size": 0 00:17:44.065 } 00:17:44.065 ] 00:17:44.065 }' 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.065 03:28:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
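The `waitforbdev` helper seen here defaults `bdev_timeout` to 2000 and polls `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev is visible. A rough Python sketch of that polling loop, with a stand-in `lookup` callable instead of an SPDK RPC (the registry set below is a stub, not real bdev state):

```python
import time

def wait_for_bdev(lookup, name, timeout_ms=2000, poll_ms=10):
    """Poll lookup(name) until it returns truthy or the timeout expires,
    mirroring waitforbdev's retry behaviour (sketch only)."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if lookup(name):
            return True
        time.sleep(poll_ms / 1000.0)
    return False

# Stub registry standing in for `rpc_cmd bdev_get_bdevs -b <name>`.
registry = {"BaseBdev1", "BaseBdev2"}
print(wait_for_bdev(registry.__contains__, "BaseBdev2"))                    # -> True
print(wait_for_bdev(registry.__contains__, "NoSuchBdev", timeout_ms=50))    # -> False
```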
00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 [2024-11-05 03:28:58.116685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.633 BaseBdev2 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 [ 00:17:44.633 { 00:17:44.633 "name": "BaseBdev2", 00:17:44.633 "aliases": [ 00:17:44.633 
"43428166-c87a-4137-8561-8ee4b0c131fc" 00:17:44.633 ], 00:17:44.633 "product_name": "Malloc disk", 00:17:44.633 "block_size": 512, 00:17:44.633 "num_blocks": 65536, 00:17:44.633 "uuid": "43428166-c87a-4137-8561-8ee4b0c131fc", 00:17:44.633 "assigned_rate_limits": { 00:17:44.633 "rw_ios_per_sec": 0, 00:17:44.633 "rw_mbytes_per_sec": 0, 00:17:44.633 "r_mbytes_per_sec": 0, 00:17:44.633 "w_mbytes_per_sec": 0 00:17:44.633 }, 00:17:44.633 "claimed": true, 00:17:44.633 "claim_type": "exclusive_write", 00:17:44.633 "zoned": false, 00:17:44.633 "supported_io_types": { 00:17:44.633 "read": true, 00:17:44.633 "write": true, 00:17:44.633 "unmap": true, 00:17:44.633 "flush": true, 00:17:44.633 "reset": true, 00:17:44.633 "nvme_admin": false, 00:17:44.633 "nvme_io": false, 00:17:44.633 "nvme_io_md": false, 00:17:44.633 "write_zeroes": true, 00:17:44.633 "zcopy": true, 00:17:44.633 "get_zone_info": false, 00:17:44.633 "zone_management": false, 00:17:44.633 "zone_append": false, 00:17:44.633 "compare": false, 00:17:44.633 "compare_and_write": false, 00:17:44.633 "abort": true, 00:17:44.633 "seek_hole": false, 00:17:44.633 "seek_data": false, 00:17:44.633 "copy": true, 00:17:44.633 "nvme_iov_md": false 00:17:44.633 }, 00:17:44.633 "memory_domains": [ 00:17:44.633 { 00:17:44.633 "dma_device_id": "system", 00:17:44.633 "dma_device_type": 1 00:17:44.633 }, 00:17:44.633 { 00:17:44.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.633 "dma_device_type": 2 00:17:44.633 } 00:17:44.633 ], 00:17:44.633 "driver_specific": {} 00:17:44.633 } 00:17:44.633 ] 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.633 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.634 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.634 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.634 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.634 "name": "Existed_Raid", 00:17:44.634 "uuid": 
"d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:44.634 "strip_size_kb": 64, 00:17:44.634 "state": "configuring", 00:17:44.634 "raid_level": "raid5f", 00:17:44.634 "superblock": true, 00:17:44.634 "num_base_bdevs": 4, 00:17:44.634 "num_base_bdevs_discovered": 2, 00:17:44.634 "num_base_bdevs_operational": 4, 00:17:44.634 "base_bdevs_list": [ 00:17:44.634 { 00:17:44.634 "name": "BaseBdev1", 00:17:44.634 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:44.634 "is_configured": true, 00:17:44.634 "data_offset": 2048, 00:17:44.634 "data_size": 63488 00:17:44.634 }, 00:17:44.634 { 00:17:44.634 "name": "BaseBdev2", 00:17:44.634 "uuid": "43428166-c87a-4137-8561-8ee4b0c131fc", 00:17:44.634 "is_configured": true, 00:17:44.634 "data_offset": 2048, 00:17:44.634 "data_size": 63488 00:17:44.634 }, 00:17:44.634 { 00:17:44.634 "name": "BaseBdev3", 00:17:44.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.634 "is_configured": false, 00:17:44.634 "data_offset": 0, 00:17:44.634 "data_size": 0 00:17:44.634 }, 00:17:44.634 { 00:17:44.634 "name": "BaseBdev4", 00:17:44.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.634 "is_configured": false, 00:17:44.634 "data_offset": 0, 00:17:44.634 "data_size": 0 00:17:44.634 } 00:17:44.634 ] 00:17:44.634 }' 00:17:44.634 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.634 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.201 [2024-11-05 03:28:58.739062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.201 BaseBdev3 
00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.201 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.202 [ 00:17:45.202 { 00:17:45.202 "name": "BaseBdev3", 00:17:45.202 "aliases": [ 00:17:45.202 "3756aad1-c78a-42fe-bd95-897fa546cfbe" 00:17:45.202 ], 00:17:45.202 "product_name": "Malloc disk", 00:17:45.202 "block_size": 512, 00:17:45.202 "num_blocks": 65536, 00:17:45.202 "uuid": "3756aad1-c78a-42fe-bd95-897fa546cfbe", 00:17:45.202 
"assigned_rate_limits": { 00:17:45.202 "rw_ios_per_sec": 0, 00:17:45.202 "rw_mbytes_per_sec": 0, 00:17:45.202 "r_mbytes_per_sec": 0, 00:17:45.202 "w_mbytes_per_sec": 0 00:17:45.202 }, 00:17:45.202 "claimed": true, 00:17:45.202 "claim_type": "exclusive_write", 00:17:45.202 "zoned": false, 00:17:45.202 "supported_io_types": { 00:17:45.202 "read": true, 00:17:45.202 "write": true, 00:17:45.202 "unmap": true, 00:17:45.202 "flush": true, 00:17:45.202 "reset": true, 00:17:45.202 "nvme_admin": false, 00:17:45.202 "nvme_io": false, 00:17:45.202 "nvme_io_md": false, 00:17:45.202 "write_zeroes": true, 00:17:45.202 "zcopy": true, 00:17:45.202 "get_zone_info": false, 00:17:45.202 "zone_management": false, 00:17:45.202 "zone_append": false, 00:17:45.202 "compare": false, 00:17:45.202 "compare_and_write": false, 00:17:45.202 "abort": true, 00:17:45.202 "seek_hole": false, 00:17:45.202 "seek_data": false, 00:17:45.202 "copy": true, 00:17:45.202 "nvme_iov_md": false 00:17:45.202 }, 00:17:45.202 "memory_domains": [ 00:17:45.202 { 00:17:45.202 "dma_device_id": "system", 00:17:45.202 "dma_device_type": 1 00:17:45.202 }, 00:17:45.202 { 00:17:45.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.202 "dma_device_type": 2 00:17:45.202 } 00:17:45.202 ], 00:17:45.202 "driver_specific": {} 00:17:45.202 } 00:17:45.202 ] 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.202 "name": "Existed_Raid", 00:17:45.202 "uuid": "d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:45.202 "strip_size_kb": 64, 00:17:45.202 "state": "configuring", 00:17:45.202 "raid_level": "raid5f", 00:17:45.202 "superblock": true, 00:17:45.202 "num_base_bdevs": 4, 00:17:45.202 "num_base_bdevs_discovered": 3, 
00:17:45.202 "num_base_bdevs_operational": 4, 00:17:45.202 "base_bdevs_list": [ 00:17:45.202 { 00:17:45.202 "name": "BaseBdev1", 00:17:45.202 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:45.202 "is_configured": true, 00:17:45.202 "data_offset": 2048, 00:17:45.202 "data_size": 63488 00:17:45.202 }, 00:17:45.202 { 00:17:45.202 "name": "BaseBdev2", 00:17:45.202 "uuid": "43428166-c87a-4137-8561-8ee4b0c131fc", 00:17:45.202 "is_configured": true, 00:17:45.202 "data_offset": 2048, 00:17:45.202 "data_size": 63488 00:17:45.202 }, 00:17:45.202 { 00:17:45.202 "name": "BaseBdev3", 00:17:45.202 "uuid": "3756aad1-c78a-42fe-bd95-897fa546cfbe", 00:17:45.202 "is_configured": true, 00:17:45.202 "data_offset": 2048, 00:17:45.202 "data_size": 63488 00:17:45.202 }, 00:17:45.202 { 00:17:45.202 "name": "BaseBdev4", 00:17:45.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.202 "is_configured": false, 00:17:45.202 "data_offset": 0, 00:17:45.202 "data_size": 0 00:17:45.202 } 00:17:45.202 ] 00:17:45.202 }' 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.202 03:28:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.770 [2024-11-05 03:28:59.325080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:45.770 [2024-11-05 03:28:59.325501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:45.770 [2024-11-05 03:28:59.325522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:45.770 [2024-11-05 
03:28:59.325860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:45.770 BaseBdev4 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.770 [2024-11-05 03:28:59.332648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:45.770 [2024-11-05 03:28:59.332862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:45.770 [2024-11-05 03:28:59.333193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:45.770 03:28:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.770 [ 00:17:45.770 { 00:17:45.770 "name": "BaseBdev4", 00:17:45.770 "aliases": [ 00:17:45.770 "9cf82ea5-c273-4ef2-9ed6-e059676b7045" 00:17:45.770 ], 00:17:45.770 "product_name": "Malloc disk", 00:17:45.770 "block_size": 512, 00:17:45.770 "num_blocks": 65536, 00:17:45.770 "uuid": "9cf82ea5-c273-4ef2-9ed6-e059676b7045", 00:17:45.770 "assigned_rate_limits": { 00:17:45.770 "rw_ios_per_sec": 0, 00:17:45.770 "rw_mbytes_per_sec": 0, 00:17:45.770 "r_mbytes_per_sec": 0, 00:17:45.770 "w_mbytes_per_sec": 0 00:17:45.770 }, 00:17:45.770 "claimed": true, 00:17:45.770 "claim_type": "exclusive_write", 00:17:45.770 "zoned": false, 00:17:45.770 "supported_io_types": { 00:17:45.770 "read": true, 00:17:45.770 "write": true, 00:17:45.770 "unmap": true, 00:17:45.770 "flush": true, 00:17:45.770 "reset": true, 00:17:45.770 "nvme_admin": false, 00:17:45.770 "nvme_io": false, 00:17:45.770 "nvme_io_md": false, 00:17:45.770 "write_zeroes": true, 00:17:45.770 "zcopy": true, 00:17:45.770 "get_zone_info": false, 00:17:45.770 "zone_management": false, 00:17:45.770 "zone_append": false, 00:17:45.770 "compare": false, 00:17:45.770 "compare_and_write": false, 00:17:45.770 "abort": true, 00:17:45.770 "seek_hole": false, 00:17:45.770 "seek_data": false, 00:17:45.770 "copy": true, 00:17:45.770 "nvme_iov_md": false 00:17:45.770 }, 00:17:45.770 "memory_domains": [ 00:17:45.770 { 00:17:45.770 "dma_device_id": "system", 00:17:45.770 "dma_device_type": 1 00:17:45.770 }, 00:17:45.770 { 00:17:45.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.770 "dma_device_type": 2 00:17:45.770 } 00:17:45.770 ], 00:17:45.770 "driver_specific": {} 00:17:45.770 } 00:17:45.770 ] 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.770 03:28:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:45.770 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.029 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.029 "name": "Existed_Raid", 00:17:46.029 "uuid": "d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:46.029 "strip_size_kb": 64, 00:17:46.029 "state": "online", 00:17:46.029 "raid_level": "raid5f", 00:17:46.029 "superblock": true, 00:17:46.029 "num_base_bdevs": 4, 00:17:46.029 "num_base_bdevs_discovered": 4, 00:17:46.029 "num_base_bdevs_operational": 4, 00:17:46.029 "base_bdevs_list": [ 00:17:46.029 { 00:17:46.029 "name": "BaseBdev1", 00:17:46.029 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:46.029 "is_configured": true, 00:17:46.029 "data_offset": 2048, 00:17:46.029 "data_size": 63488 00:17:46.029 }, 00:17:46.029 { 00:17:46.029 "name": "BaseBdev2", 00:17:46.029 "uuid": "43428166-c87a-4137-8561-8ee4b0c131fc", 00:17:46.029 "is_configured": true, 00:17:46.029 "data_offset": 2048, 00:17:46.029 "data_size": 63488 00:17:46.029 }, 00:17:46.029 { 00:17:46.029 "name": "BaseBdev3", 00:17:46.029 "uuid": "3756aad1-c78a-42fe-bd95-897fa546cfbe", 00:17:46.029 "is_configured": true, 00:17:46.029 "data_offset": 2048, 00:17:46.029 "data_size": 63488 00:17:46.029 }, 00:17:46.029 { 00:17:46.029 "name": "BaseBdev4", 00:17:46.029 "uuid": "9cf82ea5-c273-4ef2-9ed6-e059676b7045", 00:17:46.029 "is_configured": true, 00:17:46.029 "data_offset": 2048, 00:17:46.029 "data_size": 63488 00:17:46.029 } 00:17:46.029 ] 00:17:46.029 }' 00:17:46.029 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.029 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.287 [2024-11-05 03:28:59.880945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.287 03:28:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.545 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.545 "name": "Existed_Raid", 00:17:46.545 "aliases": [ 00:17:46.545 "d7aef115-09d3-4f17-8014-9aa565c5024b" 00:17:46.545 ], 00:17:46.545 "product_name": "Raid Volume", 00:17:46.545 "block_size": 512, 00:17:46.545 "num_blocks": 190464, 00:17:46.545 "uuid": "d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:46.545 "assigned_rate_limits": { 00:17:46.545 "rw_ios_per_sec": 0, 00:17:46.545 "rw_mbytes_per_sec": 0, 00:17:46.545 "r_mbytes_per_sec": 0, 00:17:46.545 "w_mbytes_per_sec": 0 00:17:46.545 }, 00:17:46.545 "claimed": false, 00:17:46.545 "zoned": false, 00:17:46.545 "supported_io_types": { 00:17:46.545 "read": true, 00:17:46.545 "write": true, 00:17:46.545 "unmap": false, 00:17:46.545 "flush": false, 
00:17:46.545 "reset": true, 00:17:46.545 "nvme_admin": false, 00:17:46.545 "nvme_io": false, 00:17:46.545 "nvme_io_md": false, 00:17:46.545 "write_zeroes": true, 00:17:46.545 "zcopy": false, 00:17:46.545 "get_zone_info": false, 00:17:46.545 "zone_management": false, 00:17:46.545 "zone_append": false, 00:17:46.545 "compare": false, 00:17:46.545 "compare_and_write": false, 00:17:46.545 "abort": false, 00:17:46.545 "seek_hole": false, 00:17:46.545 "seek_data": false, 00:17:46.545 "copy": false, 00:17:46.545 "nvme_iov_md": false 00:17:46.545 }, 00:17:46.545 "driver_specific": { 00:17:46.545 "raid": { 00:17:46.545 "uuid": "d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:46.545 "strip_size_kb": 64, 00:17:46.545 "state": "online", 00:17:46.545 "raid_level": "raid5f", 00:17:46.545 "superblock": true, 00:17:46.545 "num_base_bdevs": 4, 00:17:46.545 "num_base_bdevs_discovered": 4, 00:17:46.545 "num_base_bdevs_operational": 4, 00:17:46.545 "base_bdevs_list": [ 00:17:46.545 { 00:17:46.545 "name": "BaseBdev1", 00:17:46.545 "uuid": "f18ced38-634d-4295-9a3b-927fa31be1de", 00:17:46.545 "is_configured": true, 00:17:46.545 "data_offset": 2048, 00:17:46.545 "data_size": 63488 00:17:46.545 }, 00:17:46.545 { 00:17:46.545 "name": "BaseBdev2", 00:17:46.545 "uuid": "43428166-c87a-4137-8561-8ee4b0c131fc", 00:17:46.545 "is_configured": true, 00:17:46.545 "data_offset": 2048, 00:17:46.545 "data_size": 63488 00:17:46.545 }, 00:17:46.545 { 00:17:46.545 "name": "BaseBdev3", 00:17:46.545 "uuid": "3756aad1-c78a-42fe-bd95-897fa546cfbe", 00:17:46.545 "is_configured": true, 00:17:46.545 "data_offset": 2048, 00:17:46.545 "data_size": 63488 00:17:46.545 }, 00:17:46.545 { 00:17:46.545 "name": "BaseBdev4", 00:17:46.545 "uuid": "9cf82ea5-c273-4ef2-9ed6-e059676b7045", 00:17:46.545 "is_configured": true, 00:17:46.545 "data_offset": 2048, 00:17:46.545 "data_size": 63488 00:17:46.545 } 00:17:46.545 ] 00:17:46.545 } 00:17:46.545 } 00:17:46.545 }' 00:17:46.545 03:28:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.545 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:46.545 BaseBdev2 00:17:46.545 BaseBdev3 00:17:46.545 BaseBdev4' 00:17:46.545 03:28:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.545 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:46.545 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.545 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.546 03:29:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.546 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.804 [2024-11-05 03:29:00.244882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.804 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.805 "name": "Existed_Raid", 00:17:46.805 "uuid": "d7aef115-09d3-4f17-8014-9aa565c5024b", 00:17:46.805 "strip_size_kb": 64, 00:17:46.805 "state": "online", 00:17:46.805 "raid_level": "raid5f", 00:17:46.805 "superblock": true, 00:17:46.805 "num_base_bdevs": 4, 00:17:46.805 "num_base_bdevs_discovered": 3, 00:17:46.805 "num_base_bdevs_operational": 3, 00:17:46.805 "base_bdevs_list": [ 00:17:46.805 { 00:17:46.805 "name": null, 00:17:46.805 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:46.805 "is_configured": false, 00:17:46.805 "data_offset": 0, 00:17:46.805 "data_size": 63488 00:17:46.805 }, 00:17:46.805 { 00:17:46.805 "name": "BaseBdev2", 00:17:46.805 "uuid": "43428166-c87a-4137-8561-8ee4b0c131fc", 00:17:46.805 "is_configured": true, 00:17:46.805 "data_offset": 2048, 00:17:46.805 "data_size": 63488 00:17:46.805 }, 00:17:46.805 { 00:17:46.805 "name": "BaseBdev3", 00:17:46.805 "uuid": "3756aad1-c78a-42fe-bd95-897fa546cfbe", 00:17:46.805 "is_configured": true, 00:17:46.805 "data_offset": 2048, 00:17:46.805 "data_size": 63488 00:17:46.805 }, 00:17:46.805 { 00:17:46.805 "name": "BaseBdev4", 00:17:46.805 "uuid": "9cf82ea5-c273-4ef2-9ed6-e059676b7045", 00:17:46.805 "is_configured": true, 00:17:46.805 "data_offset": 2048, 00:17:46.805 "data_size": 63488 00:17:46.805 } 00:17:46.805 ] 00:17:46.805 }' 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.805 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.373 [2024-11-05 03:29:00.899888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.373 [2024-11-05 03:29:00.900115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.373 [2024-11-05 03:29:00.983620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.373 03:29:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.373 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.632 
03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.632 [2024-11-05 03:29:01.047693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.632 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.632 [2024-11-05 03:29:01.193592] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:47.632 [2024-11-05 03:29:01.193664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.891 BaseBdev2 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.891 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.891 [ 00:17:47.891 { 00:17:47.891 "name": "BaseBdev2", 00:17:47.891 "aliases": [ 00:17:47.891 "0aee60e5-2c8f-4112-92f4-27a42e45cc43" 00:17:47.891 ], 00:17:47.891 "product_name": "Malloc disk", 00:17:47.891 "block_size": 512, 00:17:47.891 "num_blocks": 65536, 00:17:47.891 "uuid": 
"0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:47.891 "assigned_rate_limits": { 00:17:47.891 "rw_ios_per_sec": 0, 00:17:47.891 "rw_mbytes_per_sec": 0, 00:17:47.891 "r_mbytes_per_sec": 0, 00:17:47.891 "w_mbytes_per_sec": 0 00:17:47.891 }, 00:17:47.891 "claimed": false, 00:17:47.891 "zoned": false, 00:17:47.891 "supported_io_types": { 00:17:47.891 "read": true, 00:17:47.891 "write": true, 00:17:47.891 "unmap": true, 00:17:47.891 "flush": true, 00:17:47.891 "reset": true, 00:17:47.891 "nvme_admin": false, 00:17:47.891 "nvme_io": false, 00:17:47.891 "nvme_io_md": false, 00:17:47.891 "write_zeroes": true, 00:17:47.891 "zcopy": true, 00:17:47.891 "get_zone_info": false, 00:17:47.891 "zone_management": false, 00:17:47.891 "zone_append": false, 00:17:47.891 "compare": false, 00:17:47.891 "compare_and_write": false, 00:17:47.891 "abort": true, 00:17:47.891 "seek_hole": false, 00:17:47.891 "seek_data": false, 00:17:47.891 "copy": true, 00:17:47.891 "nvme_iov_md": false 00:17:47.891 }, 00:17:47.891 "memory_domains": [ 00:17:47.891 { 00:17:47.891 "dma_device_id": "system", 00:17:47.891 "dma_device_type": 1 00:17:47.891 }, 00:17:47.891 { 00:17:47.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.892 "dma_device_type": 2 00:17:47.892 } 00:17:47.892 ], 00:17:47.892 "driver_specific": {} 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.892 BaseBdev3 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.892 [ 00:17:47.892 { 00:17:47.892 "name": "BaseBdev3", 00:17:47.892 "aliases": [ 00:17:47.892 "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7" 00:17:47.892 ], 00:17:47.892 
"product_name": "Malloc disk", 00:17:47.892 "block_size": 512, 00:17:47.892 "num_blocks": 65536, 00:17:47.892 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:47.892 "assigned_rate_limits": { 00:17:47.892 "rw_ios_per_sec": 0, 00:17:47.892 "rw_mbytes_per_sec": 0, 00:17:47.892 "r_mbytes_per_sec": 0, 00:17:47.892 "w_mbytes_per_sec": 0 00:17:47.892 }, 00:17:47.892 "claimed": false, 00:17:47.892 "zoned": false, 00:17:47.892 "supported_io_types": { 00:17:47.892 "read": true, 00:17:47.892 "write": true, 00:17:47.892 "unmap": true, 00:17:47.892 "flush": true, 00:17:47.892 "reset": true, 00:17:47.892 "nvme_admin": false, 00:17:47.892 "nvme_io": false, 00:17:47.892 "nvme_io_md": false, 00:17:47.892 "write_zeroes": true, 00:17:47.892 "zcopy": true, 00:17:47.892 "get_zone_info": false, 00:17:47.892 "zone_management": false, 00:17:47.892 "zone_append": false, 00:17:47.892 "compare": false, 00:17:47.892 "compare_and_write": false, 00:17:47.892 "abort": true, 00:17:47.892 "seek_hole": false, 00:17:47.892 "seek_data": false, 00:17:47.892 "copy": true, 00:17:47.892 "nvme_iov_md": false 00:17:47.892 }, 00:17:47.892 "memory_domains": [ 00:17:47.892 { 00:17:47.892 "dma_device_id": "system", 00:17:47.892 "dma_device_type": 1 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.892 "dma_device_type": 2 00:17:47.892 } 00:17:47.892 ], 00:17:47.892 "driver_specific": {} 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.892 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 BaseBdev4 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 [ 00:17:48.152 { 00:17:48.152 "name": "BaseBdev4", 00:17:48.152 
"aliases": [ 00:17:48.152 "264f8291-de47-4939-85ee-d26dbb5c5399" 00:17:48.152 ], 00:17:48.152 "product_name": "Malloc disk", 00:17:48.152 "block_size": 512, 00:17:48.152 "num_blocks": 65536, 00:17:48.152 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:48.152 "assigned_rate_limits": { 00:17:48.152 "rw_ios_per_sec": 0, 00:17:48.152 "rw_mbytes_per_sec": 0, 00:17:48.152 "r_mbytes_per_sec": 0, 00:17:48.152 "w_mbytes_per_sec": 0 00:17:48.152 }, 00:17:48.152 "claimed": false, 00:17:48.152 "zoned": false, 00:17:48.152 "supported_io_types": { 00:17:48.152 "read": true, 00:17:48.152 "write": true, 00:17:48.152 "unmap": true, 00:17:48.152 "flush": true, 00:17:48.152 "reset": true, 00:17:48.152 "nvme_admin": false, 00:17:48.152 "nvme_io": false, 00:17:48.152 "nvme_io_md": false, 00:17:48.152 "write_zeroes": true, 00:17:48.152 "zcopy": true, 00:17:48.152 "get_zone_info": false, 00:17:48.152 "zone_management": false, 00:17:48.152 "zone_append": false, 00:17:48.152 "compare": false, 00:17:48.152 "compare_and_write": false, 00:17:48.152 "abort": true, 00:17:48.152 "seek_hole": false, 00:17:48.152 "seek_data": false, 00:17:48.152 "copy": true, 00:17:48.152 "nvme_iov_md": false 00:17:48.152 }, 00:17:48.152 "memory_domains": [ 00:17:48.152 { 00:17:48.152 "dma_device_id": "system", 00:17:48.152 "dma_device_type": 1 00:17:48.152 }, 00:17:48.152 { 00:17:48.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.152 "dma_device_type": 2 00:17:48.152 } 00:17:48.152 ], 00:17:48.152 "driver_specific": {} 00:17:48.152 } 00:17:48.152 ] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.152 
03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 [2024-11-05 03:29:01.572435] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.152 [2024-11-05 03:29:01.572487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.152 [2024-11-05 03:29:01.572527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.152 [2024-11-05 03:29:01.575007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.152 [2024-11-05 03:29:01.575101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.152 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.152 "name": "Existed_Raid", 00:17:48.152 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:48.152 "strip_size_kb": 64, 00:17:48.152 "state": "configuring", 00:17:48.152 "raid_level": "raid5f", 00:17:48.152 "superblock": true, 00:17:48.152 "num_base_bdevs": 4, 00:17:48.152 "num_base_bdevs_discovered": 3, 00:17:48.152 "num_base_bdevs_operational": 4, 00:17:48.152 "base_bdevs_list": [ 00:17:48.152 { 00:17:48.152 "name": "BaseBdev1", 00:17:48.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.152 "is_configured": false, 00:17:48.152 "data_offset": 0, 00:17:48.152 "data_size": 0 00:17:48.152 }, 00:17:48.152 { 00:17:48.152 "name": "BaseBdev2", 00:17:48.152 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:48.152 "is_configured": true, 00:17:48.152 "data_offset": 2048, 00:17:48.152 "data_size": 63488 00:17:48.152 }, 00:17:48.152 { 00:17:48.152 "name": "BaseBdev3", 
00:17:48.152 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:48.152 "is_configured": true, 00:17:48.152 "data_offset": 2048, 00:17:48.152 "data_size": 63488 00:17:48.152 }, 00:17:48.152 { 00:17:48.152 "name": "BaseBdev4", 00:17:48.152 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:48.152 "is_configured": true, 00:17:48.152 "data_offset": 2048, 00:17:48.153 "data_size": 63488 00:17:48.153 } 00:17:48.153 ] 00:17:48.153 }' 00:17:48.153 03:29:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.153 03:29:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.720 [2024-11-05 03:29:02.096559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.720 
03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.720 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.720 "name": "Existed_Raid", 00:17:48.721 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:48.721 "strip_size_kb": 64, 00:17:48.721 "state": "configuring", 00:17:48.721 "raid_level": "raid5f", 00:17:48.721 "superblock": true, 00:17:48.721 "num_base_bdevs": 4, 00:17:48.721 "num_base_bdevs_discovered": 2, 00:17:48.721 "num_base_bdevs_operational": 4, 00:17:48.721 "base_bdevs_list": [ 00:17:48.721 { 00:17:48.721 "name": "BaseBdev1", 00:17:48.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.721 "is_configured": false, 00:17:48.721 "data_offset": 0, 00:17:48.721 "data_size": 0 00:17:48.721 }, 00:17:48.721 { 00:17:48.721 "name": null, 00:17:48.721 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:48.721 "is_configured": false, 00:17:48.721 "data_offset": 0, 00:17:48.721 "data_size": 63488 00:17:48.721 }, 00:17:48.721 { 
00:17:48.721 "name": "BaseBdev3", 00:17:48.721 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:48.721 "is_configured": true, 00:17:48.721 "data_offset": 2048, 00:17:48.721 "data_size": 63488 00:17:48.721 }, 00:17:48.721 { 00:17:48.721 "name": "BaseBdev4", 00:17:48.721 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:48.721 "is_configured": true, 00:17:48.721 "data_offset": 2048, 00:17:48.721 "data_size": 63488 00:17:48.721 } 00:17:48.721 ] 00:17:48.721 }' 00:17:48.721 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.721 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.288 [2024-11-05 03:29:02.736171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.288 BaseBdev1 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:49.288 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.289 [ 00:17:49.289 { 00:17:49.289 "name": "BaseBdev1", 00:17:49.289 "aliases": [ 00:17:49.289 "a7b44a4f-605e-4853-beca-d3da246b3a2a" 00:17:49.289 ], 00:17:49.289 "product_name": "Malloc disk", 00:17:49.289 "block_size": 512, 00:17:49.289 "num_blocks": 65536, 00:17:49.289 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:49.289 "assigned_rate_limits": { 00:17:49.289 "rw_ios_per_sec": 0, 00:17:49.289 "rw_mbytes_per_sec": 0, 00:17:49.289 
"r_mbytes_per_sec": 0, 00:17:49.289 "w_mbytes_per_sec": 0 00:17:49.289 }, 00:17:49.289 "claimed": true, 00:17:49.289 "claim_type": "exclusive_write", 00:17:49.289 "zoned": false, 00:17:49.289 "supported_io_types": { 00:17:49.289 "read": true, 00:17:49.289 "write": true, 00:17:49.289 "unmap": true, 00:17:49.289 "flush": true, 00:17:49.289 "reset": true, 00:17:49.289 "nvme_admin": false, 00:17:49.289 "nvme_io": false, 00:17:49.289 "nvme_io_md": false, 00:17:49.289 "write_zeroes": true, 00:17:49.289 "zcopy": true, 00:17:49.289 "get_zone_info": false, 00:17:49.289 "zone_management": false, 00:17:49.289 "zone_append": false, 00:17:49.289 "compare": false, 00:17:49.289 "compare_and_write": false, 00:17:49.289 "abort": true, 00:17:49.289 "seek_hole": false, 00:17:49.289 "seek_data": false, 00:17:49.289 "copy": true, 00:17:49.289 "nvme_iov_md": false 00:17:49.289 }, 00:17:49.289 "memory_domains": [ 00:17:49.289 { 00:17:49.289 "dma_device_id": "system", 00:17:49.289 "dma_device_type": 1 00:17:49.289 }, 00:17:49.289 { 00:17:49.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.289 "dma_device_type": 2 00:17:49.289 } 00:17:49.289 ], 00:17:49.289 "driver_specific": {} 00:17:49.289 } 00:17:49.289 ] 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.289 03:29:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.289 "name": "Existed_Raid", 00:17:49.289 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:49.289 "strip_size_kb": 64, 00:17:49.289 "state": "configuring", 00:17:49.289 "raid_level": "raid5f", 00:17:49.289 "superblock": true, 00:17:49.289 "num_base_bdevs": 4, 00:17:49.289 "num_base_bdevs_discovered": 3, 00:17:49.289 "num_base_bdevs_operational": 4, 00:17:49.289 "base_bdevs_list": [ 00:17:49.289 { 00:17:49.289 "name": "BaseBdev1", 00:17:49.289 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:49.289 "is_configured": true, 00:17:49.289 "data_offset": 2048, 00:17:49.289 "data_size": 63488 00:17:49.289 
}, 00:17:49.289 { 00:17:49.289 "name": null, 00:17:49.289 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:49.289 "is_configured": false, 00:17:49.289 "data_offset": 0, 00:17:49.289 "data_size": 63488 00:17:49.289 }, 00:17:49.289 { 00:17:49.289 "name": "BaseBdev3", 00:17:49.289 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:49.289 "is_configured": true, 00:17:49.289 "data_offset": 2048, 00:17:49.289 "data_size": 63488 00:17:49.289 }, 00:17:49.289 { 00:17:49.289 "name": "BaseBdev4", 00:17:49.289 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:49.289 "is_configured": true, 00:17:49.289 "data_offset": 2048, 00:17:49.289 "data_size": 63488 00:17:49.289 } 00:17:49.289 ] 00:17:49.289 }' 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.289 03:29:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 
[2024-11-05 03:29:03.348472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.857 "name": "Existed_Raid", 00:17:49.857 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:49.857 "strip_size_kb": 64, 00:17:49.857 "state": "configuring", 00:17:49.857 "raid_level": "raid5f", 00:17:49.857 "superblock": true, 00:17:49.857 "num_base_bdevs": 4, 00:17:49.857 "num_base_bdevs_discovered": 2, 00:17:49.857 "num_base_bdevs_operational": 4, 00:17:49.857 "base_bdevs_list": [ 00:17:49.857 { 00:17:49.857 "name": "BaseBdev1", 00:17:49.857 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:49.857 "is_configured": true, 00:17:49.857 "data_offset": 2048, 00:17:49.857 "data_size": 63488 00:17:49.857 }, 00:17:49.857 { 00:17:49.857 "name": null, 00:17:49.857 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:49.857 "is_configured": false, 00:17:49.857 "data_offset": 0, 00:17:49.857 "data_size": 63488 00:17:49.857 }, 00:17:49.857 { 00:17:49.857 "name": null, 00:17:49.857 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:49.857 "is_configured": false, 00:17:49.857 "data_offset": 0, 00:17:49.857 "data_size": 63488 00:17:49.857 }, 00:17:49.857 { 00:17:49.857 "name": "BaseBdev4", 00:17:49.857 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:49.857 "is_configured": true, 00:17:49.857 "data_offset": 2048, 00:17:49.857 "data_size": 63488 00:17:49.857 } 00:17:49.857 ] 00:17:49.857 }' 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.857 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.425 [2024-11-05 03:29:03.984715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.425 03:29:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.425 03:29:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.425 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.425 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.425 "name": "Existed_Raid", 00:17:50.425 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:50.425 "strip_size_kb": 64, 00:17:50.425 "state": "configuring", 00:17:50.425 "raid_level": "raid5f", 00:17:50.425 "superblock": true, 00:17:50.425 "num_base_bdevs": 4, 00:17:50.425 "num_base_bdevs_discovered": 3, 00:17:50.425 "num_base_bdevs_operational": 4, 00:17:50.425 "base_bdevs_list": [ 00:17:50.425 { 00:17:50.425 "name": "BaseBdev1", 00:17:50.425 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:50.425 "is_configured": true, 00:17:50.425 "data_offset": 2048, 00:17:50.425 "data_size": 63488 00:17:50.425 }, 00:17:50.425 { 00:17:50.425 "name": null, 00:17:50.425 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:50.425 "is_configured": false, 00:17:50.425 "data_offset": 0, 00:17:50.425 "data_size": 63488 00:17:50.425 }, 00:17:50.425 { 00:17:50.425 "name": "BaseBdev3", 00:17:50.425 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:50.425 "is_configured": true, 00:17:50.425 "data_offset": 2048, 00:17:50.425 "data_size": 63488 00:17:50.425 }, 00:17:50.425 { 
00:17:50.425 "name": "BaseBdev4", 00:17:50.425 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:50.425 "is_configured": true, 00:17:50.425 "data_offset": 2048, 00:17:50.425 "data_size": 63488 00:17:50.425 } 00:17:50.425 ] 00:17:50.425 }' 00:17:50.425 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.425 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.013 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.271 [2024-11-05 03:29:04.653066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.271 "name": "Existed_Raid", 00:17:51.271 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:51.271 "strip_size_kb": 64, 00:17:51.271 "state": "configuring", 00:17:51.271 "raid_level": "raid5f", 00:17:51.271 "superblock": true, 00:17:51.271 "num_base_bdevs": 4, 00:17:51.271 "num_base_bdevs_discovered": 2, 00:17:51.271 
"num_base_bdevs_operational": 4, 00:17:51.271 "base_bdevs_list": [ 00:17:51.271 { 00:17:51.271 "name": null, 00:17:51.271 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:51.271 "is_configured": false, 00:17:51.271 "data_offset": 0, 00:17:51.271 "data_size": 63488 00:17:51.271 }, 00:17:51.271 { 00:17:51.271 "name": null, 00:17:51.271 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:51.271 "is_configured": false, 00:17:51.271 "data_offset": 0, 00:17:51.271 "data_size": 63488 00:17:51.271 }, 00:17:51.271 { 00:17:51.271 "name": "BaseBdev3", 00:17:51.271 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:51.271 "is_configured": true, 00:17:51.271 "data_offset": 2048, 00:17:51.271 "data_size": 63488 00:17:51.271 }, 00:17:51.271 { 00:17:51.271 "name": "BaseBdev4", 00:17:51.271 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:51.271 "is_configured": true, 00:17:51.271 "data_offset": 2048, 00:17:51.271 "data_size": 63488 00:17:51.271 } 00:17:51.271 ] 00:17:51.271 }' 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.271 03:29:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.838 [2024-11-05 03:29:05.359387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.838 "name": "Existed_Raid", 00:17:51.838 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:51.838 "strip_size_kb": 64, 00:17:51.838 "state": "configuring", 00:17:51.838 "raid_level": "raid5f", 00:17:51.838 "superblock": true, 00:17:51.838 "num_base_bdevs": 4, 00:17:51.838 "num_base_bdevs_discovered": 3, 00:17:51.838 "num_base_bdevs_operational": 4, 00:17:51.838 "base_bdevs_list": [ 00:17:51.838 { 00:17:51.838 "name": null, 00:17:51.838 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:51.838 "is_configured": false, 00:17:51.838 "data_offset": 0, 00:17:51.838 "data_size": 63488 00:17:51.838 }, 00:17:51.838 { 00:17:51.838 "name": "BaseBdev2", 00:17:51.838 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:51.838 "is_configured": true, 00:17:51.838 "data_offset": 2048, 00:17:51.838 "data_size": 63488 00:17:51.838 }, 00:17:51.838 { 00:17:51.838 "name": "BaseBdev3", 00:17:51.838 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:51.838 "is_configured": true, 00:17:51.838 "data_offset": 2048, 00:17:51.838 "data_size": 63488 00:17:51.838 }, 00:17:51.838 { 00:17:51.838 "name": "BaseBdev4", 00:17:51.838 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:51.838 "is_configured": true, 00:17:51.838 "data_offset": 2048, 00:17:51.838 "data_size": 63488 00:17:51.838 } 00:17:51.838 ] 00:17:51.838 }' 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.838 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a7b44a4f-605e-4853-beca-d3da246b3a2a 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.406 03:29:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 [2024-11-05 03:29:06.034846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:52.406 [2024-11-05 03:29:06.035189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.406 [2024-11-05 
03:29:06.035222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:52.406 NewBaseBdev 00:17:52.406 [2024-11-05 03:29:06.035647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.406 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 [2024-11-05 03:29:06.042345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.665 [2024-11-05 03:29:06.042544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:52.665 [2024-11-05 03:29:06.042875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.665 [ 00:17:52.665 { 00:17:52.665 "name": "NewBaseBdev", 00:17:52.665 "aliases": [ 00:17:52.665 "a7b44a4f-605e-4853-beca-d3da246b3a2a" 00:17:52.665 ], 00:17:52.665 "product_name": "Malloc disk", 00:17:52.665 "block_size": 512, 00:17:52.665 "num_blocks": 65536, 00:17:52.665 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:52.665 "assigned_rate_limits": { 00:17:52.665 "rw_ios_per_sec": 0, 00:17:52.665 "rw_mbytes_per_sec": 0, 00:17:52.665 "r_mbytes_per_sec": 0, 00:17:52.665 "w_mbytes_per_sec": 0 00:17:52.665 }, 00:17:52.665 "claimed": true, 00:17:52.665 "claim_type": "exclusive_write", 00:17:52.665 "zoned": false, 00:17:52.665 "supported_io_types": { 00:17:52.665 "read": true, 00:17:52.665 "write": true, 00:17:52.665 "unmap": true, 00:17:52.665 "flush": true, 00:17:52.665 "reset": true, 00:17:52.665 "nvme_admin": false, 00:17:52.665 "nvme_io": false, 00:17:52.665 "nvme_io_md": false, 00:17:52.665 "write_zeroes": true, 00:17:52.665 "zcopy": true, 00:17:52.665 "get_zone_info": false, 00:17:52.665 "zone_management": false, 00:17:52.665 "zone_append": false, 00:17:52.665 "compare": false, 00:17:52.665 "compare_and_write": false, 00:17:52.665 "abort": true, 00:17:52.665 "seek_hole": false, 00:17:52.665 "seek_data": false, 00:17:52.665 "copy": true, 00:17:52.665 "nvme_iov_md": false 00:17:52.665 }, 00:17:52.665 "memory_domains": [ 00:17:52.665 { 00:17:52.665 "dma_device_id": "system", 00:17:52.665 "dma_device_type": 1 00:17:52.665 }, 00:17:52.665 { 00:17:52.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.665 "dma_device_type": 2 00:17:52.665 } 00:17:52.665 ], 00:17:52.665 "driver_specific": {} 00:17:52.665 } 00:17:52.665 ] 00:17:52.665 03:29:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:52.665 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.665 "name": "Existed_Raid", 00:17:52.665 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:52.665 "strip_size_kb": 64, 00:17:52.665 "state": "online", 00:17:52.665 "raid_level": "raid5f", 00:17:52.665 "superblock": true, 00:17:52.665 "num_base_bdevs": 4, 00:17:52.665 "num_base_bdevs_discovered": 4, 00:17:52.665 "num_base_bdevs_operational": 4, 00:17:52.665 "base_bdevs_list": [ 00:17:52.665 { 00:17:52.665 "name": "NewBaseBdev", 00:17:52.665 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:52.666 "is_configured": true, 00:17:52.666 "data_offset": 2048, 00:17:52.666 "data_size": 63488 00:17:52.666 }, 00:17:52.666 { 00:17:52.666 "name": "BaseBdev2", 00:17:52.666 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:52.666 "is_configured": true, 00:17:52.666 "data_offset": 2048, 00:17:52.666 "data_size": 63488 00:17:52.666 }, 00:17:52.666 { 00:17:52.666 "name": "BaseBdev3", 00:17:52.666 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:52.666 "is_configured": true, 00:17:52.666 "data_offset": 2048, 00:17:52.666 "data_size": 63488 00:17:52.666 }, 00:17:52.666 { 00:17:52.666 "name": "BaseBdev4", 00:17:52.666 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:52.666 "is_configured": true, 00:17:52.666 "data_offset": 2048, 00:17:52.666 "data_size": 63488 00:17:52.666 } 00:17:52.666 ] 00:17:52.666 }' 00:17:52.666 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.666 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.234 [2024-11-05 03:29:06.638465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.234 "name": "Existed_Raid", 00:17:53.234 "aliases": [ 00:17:53.234 "83e83b2e-df7b-4580-b4d3-dd2091279318" 00:17:53.234 ], 00:17:53.234 "product_name": "Raid Volume", 00:17:53.234 "block_size": 512, 00:17:53.234 "num_blocks": 190464, 00:17:53.234 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:53.234 "assigned_rate_limits": { 00:17:53.234 "rw_ios_per_sec": 0, 00:17:53.234 "rw_mbytes_per_sec": 0, 00:17:53.234 "r_mbytes_per_sec": 0, 00:17:53.234 "w_mbytes_per_sec": 0 00:17:53.234 }, 00:17:53.234 "claimed": false, 00:17:53.234 "zoned": false, 00:17:53.234 "supported_io_types": { 00:17:53.234 "read": true, 00:17:53.234 "write": true, 00:17:53.234 "unmap": false, 00:17:53.234 "flush": false, 00:17:53.234 "reset": true, 00:17:53.234 "nvme_admin": false, 00:17:53.234 "nvme_io": false, 
00:17:53.234 "nvme_io_md": false, 00:17:53.234 "write_zeroes": true, 00:17:53.234 "zcopy": false, 00:17:53.234 "get_zone_info": false, 00:17:53.234 "zone_management": false, 00:17:53.234 "zone_append": false, 00:17:53.234 "compare": false, 00:17:53.234 "compare_and_write": false, 00:17:53.234 "abort": false, 00:17:53.234 "seek_hole": false, 00:17:53.234 "seek_data": false, 00:17:53.234 "copy": false, 00:17:53.234 "nvme_iov_md": false 00:17:53.234 }, 00:17:53.234 "driver_specific": { 00:17:53.234 "raid": { 00:17:53.234 "uuid": "83e83b2e-df7b-4580-b4d3-dd2091279318", 00:17:53.234 "strip_size_kb": 64, 00:17:53.234 "state": "online", 00:17:53.234 "raid_level": "raid5f", 00:17:53.234 "superblock": true, 00:17:53.234 "num_base_bdevs": 4, 00:17:53.234 "num_base_bdevs_discovered": 4, 00:17:53.234 "num_base_bdevs_operational": 4, 00:17:53.234 "base_bdevs_list": [ 00:17:53.234 { 00:17:53.234 "name": "NewBaseBdev", 00:17:53.234 "uuid": "a7b44a4f-605e-4853-beca-d3da246b3a2a", 00:17:53.234 "is_configured": true, 00:17:53.234 "data_offset": 2048, 00:17:53.234 "data_size": 63488 00:17:53.234 }, 00:17:53.234 { 00:17:53.234 "name": "BaseBdev2", 00:17:53.234 "uuid": "0aee60e5-2c8f-4112-92f4-27a42e45cc43", 00:17:53.234 "is_configured": true, 00:17:53.234 "data_offset": 2048, 00:17:53.234 "data_size": 63488 00:17:53.234 }, 00:17:53.234 { 00:17:53.234 "name": "BaseBdev3", 00:17:53.234 "uuid": "b069a0ea-4e51-4b13-bc6e-d7fb5e5f35f7", 00:17:53.234 "is_configured": true, 00:17:53.234 "data_offset": 2048, 00:17:53.234 "data_size": 63488 00:17:53.234 }, 00:17:53.234 { 00:17:53.234 "name": "BaseBdev4", 00:17:53.234 "uuid": "264f8291-de47-4939-85ee-d26dbb5c5399", 00:17:53.234 "is_configured": true, 00:17:53.234 "data_offset": 2048, 00:17:53.234 "data_size": 63488 00:17:53.234 } 00:17:53.234 ] 00:17:53.234 } 00:17:53.234 } 00:17:53.234 }' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:53.234 BaseBdev2 00:17:53.234 BaseBdev3 00:17:53.234 BaseBdev4' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.234 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.494 03:29:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.494 [2024-11-05 03:29:07.006281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.494 [2024-11-05 03:29:07.006354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.494 [2024-11-05 03:29:07.006465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.494 [2024-11-05 03:29:07.006875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.494 [2024-11-05 03:29:07.006902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83600 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83600 ']' 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83600 00:17:53.494 03:29:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83600 00:17:53.494 killing process with pid 83600 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83600' 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83600 00:17:53.494 [2024-11-05 03:29:07.048679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.494 03:29:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83600 00:17:54.062 [2024-11-05 03:29:07.431057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.008 03:29:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.008 00:17:55.008 real 0m13.327s 00:17:55.008 user 0m22.157s 00:17:55.008 sys 0m1.854s 00:17:55.008 03:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.008 ************************************ 00:17:55.008 END TEST raid5f_state_function_test_sb 00:17:55.008 ************************************ 00:17:55.008 03:29:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.008 03:29:08 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:55.008 03:29:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:55.008 
03:29:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:55.008 03:29:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.008 ************************************ 00:17:55.008 START TEST raid5f_superblock_test 00:17:55.008 ************************************ 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84283 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84283 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84283 ']' 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.008 03:29:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.293 [2024-11-05 03:29:08.672055] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:17:55.293 [2024-11-05 03:29:08.672519] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84283 ] 00:17:55.293 [2024-11-05 03:29:08.863475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.552 [2024-11-05 03:29:09.018634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.811 [2024-11-05 03:29:09.257634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.811 [2024-11-05 03:29:09.257701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 malloc1 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 [2024-11-05 03:29:09.798074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:56.379 [2024-11-05 03:29:09.798172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.379 [2024-11-05 03:29:09.798216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.379 [2024-11-05 03:29:09.798233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.379 [2024-11-05 03:29:09.801237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.379 [2024-11-05 03:29:09.801288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:56.379 pt1 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 malloc2 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 [2024-11-05 03:29:09.852365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.379 [2024-11-05 03:29:09.852437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.379 [2024-11-05 03:29:09.852469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.379 [2024-11-05 03:29:09.852484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.379 [2024-11-05 03:29:09.855460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.379 [2024-11-05 03:29:09.855513] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.379 pt2 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 malloc3 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 [2024-11-05 03:29:09.924035] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:56.379 [2024-11-05 03:29:09.924133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.379 [2024-11-05 03:29:09.924175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:56.379 [2024-11-05 03:29:09.924196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.379 [2024-11-05 03:29:09.927904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.379 [2024-11-05 03:29:09.927969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:56.379 pt3 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 malloc4 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.379 [2024-11-05 03:29:09.986390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:56.379 [2024-11-05 03:29:09.986629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.379 [2024-11-05 03:29:09.986683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:56.379 [2024-11-05 03:29:09.986704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.379 [2024-11-05 03:29:09.990251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.379 pt4 00:17:56.379 [2024-11-05 03:29:09.990472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.379 03:29:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.379 [2024-11-05 03:29:09.998777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:56.379 [2024-11-05 03:29:10.001902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.379 [2024-11-05 03:29:10.002238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:56.379 [2024-11-05 03:29:10.002432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:56.379 [2024-11-05 03:29:10.002764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.379 [2024-11-05 03:29:10.002793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:56.379 [2024-11-05 03:29:10.003260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.379 [2024-11-05 03:29:10.012604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.379 [2024-11-05 03:29:10.012793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.379 [2024-11-05 03:29:10.013167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.637 
03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.637 "name": "raid_bdev1", 00:17:56.637 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:56.637 "strip_size_kb": 64, 00:17:56.637 "state": "online", 00:17:56.637 "raid_level": "raid5f", 00:17:56.637 "superblock": true, 00:17:56.637 "num_base_bdevs": 4, 00:17:56.637 "num_base_bdevs_discovered": 4, 00:17:56.637 "num_base_bdevs_operational": 4, 00:17:56.637 "base_bdevs_list": [ 00:17:56.637 { 00:17:56.637 "name": "pt1", 00:17:56.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.637 "is_configured": true, 00:17:56.637 "data_offset": 2048, 00:17:56.637 "data_size": 63488 00:17:56.637 }, 00:17:56.637 { 00:17:56.637 "name": "pt2", 00:17:56.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.637 "is_configured": true, 00:17:56.637 "data_offset": 2048, 00:17:56.637 
"data_size": 63488 00:17:56.637 }, 00:17:56.637 { 00:17:56.637 "name": "pt3", 00:17:56.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.637 "is_configured": true, 00:17:56.637 "data_offset": 2048, 00:17:56.637 "data_size": 63488 00:17:56.637 }, 00:17:56.637 { 00:17:56.637 "name": "pt4", 00:17:56.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.637 "is_configured": true, 00:17:56.637 "data_offset": 2048, 00:17:56.637 "data_size": 63488 00:17:56.637 } 00:17:56.637 ] 00:17:56.637 }' 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.637 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.205 [2024-11-05 03:29:10.592126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.205 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.205 "name": "raid_bdev1", 00:17:57.205 "aliases": [ 00:17:57.205 "ddd88b02-9397-4155-9d4f-a5b757efdd96" 00:17:57.205 ], 00:17:57.205 "product_name": "Raid Volume", 00:17:57.205 "block_size": 512, 00:17:57.205 "num_blocks": 190464, 00:17:57.205 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:57.205 "assigned_rate_limits": { 00:17:57.205 "rw_ios_per_sec": 0, 00:17:57.205 "rw_mbytes_per_sec": 0, 00:17:57.205 "r_mbytes_per_sec": 0, 00:17:57.205 "w_mbytes_per_sec": 0 00:17:57.205 }, 00:17:57.205 "claimed": false, 00:17:57.205 "zoned": false, 00:17:57.205 "supported_io_types": { 00:17:57.205 "read": true, 00:17:57.205 "write": true, 00:17:57.205 "unmap": false, 00:17:57.205 "flush": false, 00:17:57.205 "reset": true, 00:17:57.205 "nvme_admin": false, 00:17:57.205 "nvme_io": false, 00:17:57.205 "nvme_io_md": false, 00:17:57.205 "write_zeroes": true, 00:17:57.205 "zcopy": false, 00:17:57.205 "get_zone_info": false, 00:17:57.205 "zone_management": false, 00:17:57.205 "zone_append": false, 00:17:57.205 "compare": false, 00:17:57.205 "compare_and_write": false, 00:17:57.205 "abort": false, 00:17:57.205 "seek_hole": false, 00:17:57.205 "seek_data": false, 00:17:57.205 "copy": false, 00:17:57.205 "nvme_iov_md": false 00:17:57.205 }, 00:17:57.205 "driver_specific": { 00:17:57.205 "raid": { 00:17:57.205 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:57.205 "strip_size_kb": 64, 00:17:57.205 "state": "online", 00:17:57.205 "raid_level": "raid5f", 00:17:57.205 "superblock": true, 00:17:57.205 "num_base_bdevs": 4, 00:17:57.205 "num_base_bdevs_discovered": 4, 00:17:57.205 "num_base_bdevs_operational": 4, 00:17:57.205 "base_bdevs_list": [ 00:17:57.205 { 00:17:57.205 "name": "pt1", 00:17:57.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.205 "is_configured": true, 00:17:57.205 "data_offset": 2048, 
00:17:57.205 "data_size": 63488 00:17:57.205 }, 00:17:57.205 { 00:17:57.206 "name": "pt2", 00:17:57.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.206 "is_configured": true, 00:17:57.206 "data_offset": 2048, 00:17:57.206 "data_size": 63488 00:17:57.206 }, 00:17:57.206 { 00:17:57.206 "name": "pt3", 00:17:57.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.206 "is_configured": true, 00:17:57.206 "data_offset": 2048, 00:17:57.206 "data_size": 63488 00:17:57.206 }, 00:17:57.206 { 00:17:57.206 "name": "pt4", 00:17:57.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.206 "is_configured": true, 00:17:57.206 "data_offset": 2048, 00:17:57.206 "data_size": 63488 00:17:57.206 } 00:17:57.206 ] 00:17:57.206 } 00:17:57.206 } 00:17:57.206 }' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:57.206 pt2 00:17:57.206 pt3 00:17:57.206 pt4' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.206 03:29:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.206 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.464 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.465 03:29:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:57.465 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.465 [2024-11-05 03:29:10.968229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.465 03:29:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ddd88b02-9397-4155-9d4f-a5b757efdd96 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
ddd88b02-9397-4155-9d4f-a5b757efdd96 ']' 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.465 [2024-11-05 03:29:11.012010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.465 [2024-11-05 03:29:11.012041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.465 [2024-11-05 03:29:11.012144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.465 [2024-11-05 03:29:11.012278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.465 [2024-11-05 03:29:11.012303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.465 
03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.465 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 03:29:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 [2024-11-05 03:29:11.168068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:57.724 [2024-11-05 03:29:11.170773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:57.724 [2024-11-05 03:29:11.170839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:57.724 [2024-11-05 03:29:11.170893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:57.724 [2024-11-05 03:29:11.170995] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:57.724 [2024-11-05 03:29:11.171085] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:57.724 [2024-11-05 03:29:11.171118] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:57.724 [2024-11-05 03:29:11.171147] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:57.724 [2024-11-05 03:29:11.171169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.724 [2024-11-05 03:29:11.171201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:57.724 request: 00:17:57.724 { 00:17:57.724 "name": "raid_bdev1", 00:17:57.724 "raid_level": "raid5f", 00:17:57.724 "base_bdevs": [ 00:17:57.724 "malloc1", 00:17:57.724 "malloc2", 00:17:57.724 "malloc3", 00:17:57.724 "malloc4" 00:17:57.724 ], 00:17:57.724 "strip_size_kb": 64, 00:17:57.724 "superblock": false, 00:17:57.724 "method": "bdev_raid_create", 00:17:57.724 "req_id": 1 00:17:57.724 } 00:17:57.724 Got JSON-RPC error response 
00:17:57.724 response: 00:17:57.724 { 00:17:57.724 "code": -17, 00:17:57.724 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:57.724 } 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.724 [2024-11-05 03:29:11.240154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.724 [2024-11-05 03:29:11.240423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:57.724 [2024-11-05 03:29:11.240496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:57.724 [2024-11-05 03:29:11.240607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.724 [2024-11-05 03:29:11.243853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.724 [2024-11-05 03:29:11.244074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.724 [2024-11-05 03:29:11.244329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:57.724 [2024-11-05 03:29:11.244557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.724 pt1 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.724 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.725 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.725 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.725 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.725 "name": "raid_bdev1", 00:17:57.725 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:57.725 "strip_size_kb": 64, 00:17:57.725 "state": "configuring", 00:17:57.725 "raid_level": "raid5f", 00:17:57.725 "superblock": true, 00:17:57.725 "num_base_bdevs": 4, 00:17:57.725 "num_base_bdevs_discovered": 1, 00:17:57.725 "num_base_bdevs_operational": 4, 00:17:57.725 "base_bdevs_list": [ 00:17:57.725 { 00:17:57.725 "name": "pt1", 00:17:57.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.725 "is_configured": true, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "name": null, 00:17:57.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.725 "is_configured": false, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "name": null, 00:17:57.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.725 "is_configured": false, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 }, 00:17:57.725 { 00:17:57.725 "name": null, 00:17:57.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.725 "is_configured": false, 00:17:57.725 "data_offset": 2048, 00:17:57.725 "data_size": 63488 00:17:57.725 } 00:17:57.725 ] 00:17:57.725 }' 
00:17:57.725 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.725 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.293 [2024-11-05 03:29:11.780626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.293 [2024-11-05 03:29:11.780727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.293 [2024-11-05 03:29:11.780757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:58.293 [2024-11-05 03:29:11.780776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.293 [2024-11-05 03:29:11.781371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.293 [2024-11-05 03:29:11.781408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.293 [2024-11-05 03:29:11.781505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.293 [2024-11-05 03:29:11.781543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.293 pt2 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.293 [2024-11-05 03:29:11.788613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.293 "name": "raid_bdev1", 00:17:58.293 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:58.293 "strip_size_kb": 64, 00:17:58.293 "state": "configuring", 00:17:58.293 "raid_level": "raid5f", 00:17:58.293 "superblock": true, 00:17:58.293 "num_base_bdevs": 4, 00:17:58.293 "num_base_bdevs_discovered": 1, 00:17:58.293 "num_base_bdevs_operational": 4, 00:17:58.293 "base_bdevs_list": [ 00:17:58.293 { 00:17:58.293 "name": "pt1", 00:17:58.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.293 "is_configured": true, 00:17:58.293 "data_offset": 2048, 00:17:58.293 "data_size": 63488 00:17:58.293 }, 00:17:58.293 { 00:17:58.293 "name": null, 00:17:58.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.293 "is_configured": false, 00:17:58.293 "data_offset": 0, 00:17:58.293 "data_size": 63488 00:17:58.293 }, 00:17:58.293 { 00:17:58.293 "name": null, 00:17:58.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.293 "is_configured": false, 00:17:58.293 "data_offset": 2048, 00:17:58.293 "data_size": 63488 00:17:58.293 }, 00:17:58.293 { 00:17:58.293 "name": null, 00:17:58.293 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.293 "is_configured": false, 00:17:58.293 "data_offset": 2048, 00:17:58.293 "data_size": 63488 00:17:58.293 } 00:17:58.293 ] 00:17:58.293 }' 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.293 03:29:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.863 [2024-11-05 03:29:12.316779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.863 [2024-11-05 03:29:12.316862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.863 [2024-11-05 03:29:12.316893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:58.863 [2024-11-05 03:29:12.316908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.863 [2024-11-05 03:29:12.317480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.863 [2024-11-05 03:29:12.317513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.863 [2024-11-05 03:29:12.317618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.863 [2024-11-05 03:29:12.317650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.863 pt2 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.863 [2024-11-05 03:29:12.328761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:58.863 [2024-11-05 03:29:12.328833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.863 [2024-11-05 03:29:12.328875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:58.863 [2024-11-05 03:29:12.328888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.863 [2024-11-05 03:29:12.329333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.863 [2024-11-05 03:29:12.329377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.863 [2024-11-05 03:29:12.329459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:58.863 [2024-11-05 03:29:12.329488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.863 pt3 00:17:58.863 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.864 [2024-11-05 03:29:12.336734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.864 [2024-11-05 03:29:12.336808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.864 [2024-11-05 03:29:12.336851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:58.864 [2024-11-05 03:29:12.336864] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.864 [2024-11-05 03:29:12.337333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.864 [2024-11-05 03:29:12.337378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:58.864 [2024-11-05 03:29:12.337459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:58.864 [2024-11-05 03:29:12.337487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.864 [2024-11-05 03:29:12.337661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.864 [2024-11-05 03:29:12.337827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:58.864 [2024-11-05 03:29:12.338151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.864 [2024-11-05 03:29:12.344592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.864 pt4 00:17:58.864 [2024-11-05 03:29:12.344784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:58.864 [2024-11-05 03:29:12.345020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.864 "name": "raid_bdev1", 00:17:58.864 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:58.864 "strip_size_kb": 64, 00:17:58.864 "state": "online", 00:17:58.864 "raid_level": "raid5f", 00:17:58.864 "superblock": true, 00:17:58.864 "num_base_bdevs": 4, 00:17:58.864 "num_base_bdevs_discovered": 4, 00:17:58.864 "num_base_bdevs_operational": 4, 00:17:58.864 "base_bdevs_list": [ 00:17:58.864 { 00:17:58.864 "name": "pt1", 00:17:58.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.864 "is_configured": true, 00:17:58.864 
"data_offset": 2048, 00:17:58.864 "data_size": 63488 00:17:58.864 }, 00:17:58.864 { 00:17:58.864 "name": "pt2", 00:17:58.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.864 "is_configured": true, 00:17:58.864 "data_offset": 2048, 00:17:58.864 "data_size": 63488 00:17:58.864 }, 00:17:58.864 { 00:17:58.864 "name": "pt3", 00:17:58.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.864 "is_configured": true, 00:17:58.864 "data_offset": 2048, 00:17:58.864 "data_size": 63488 00:17:58.864 }, 00:17:58.864 { 00:17:58.864 "name": "pt4", 00:17:58.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.864 "is_configured": true, 00:17:58.864 "data_offset": 2048, 00:17:58.864 "data_size": 63488 00:17:58.864 } 00:17:58.864 ] 00:17:58.864 }' 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.864 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.472 03:29:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.472 [2024-11-05 03:29:12.872922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.472 "name": "raid_bdev1", 00:17:59.472 "aliases": [ 00:17:59.472 "ddd88b02-9397-4155-9d4f-a5b757efdd96" 00:17:59.472 ], 00:17:59.472 "product_name": "Raid Volume", 00:17:59.472 "block_size": 512, 00:17:59.472 "num_blocks": 190464, 00:17:59.472 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:59.472 "assigned_rate_limits": { 00:17:59.472 "rw_ios_per_sec": 0, 00:17:59.472 "rw_mbytes_per_sec": 0, 00:17:59.472 "r_mbytes_per_sec": 0, 00:17:59.472 "w_mbytes_per_sec": 0 00:17:59.472 }, 00:17:59.472 "claimed": false, 00:17:59.472 "zoned": false, 00:17:59.472 "supported_io_types": { 00:17:59.472 "read": true, 00:17:59.472 "write": true, 00:17:59.472 "unmap": false, 00:17:59.472 "flush": false, 00:17:59.472 "reset": true, 00:17:59.472 "nvme_admin": false, 00:17:59.472 "nvme_io": false, 00:17:59.472 "nvme_io_md": false, 00:17:59.472 "write_zeroes": true, 00:17:59.472 "zcopy": false, 00:17:59.472 "get_zone_info": false, 00:17:59.472 "zone_management": false, 00:17:59.472 "zone_append": false, 00:17:59.472 "compare": false, 00:17:59.472 "compare_and_write": false, 00:17:59.472 "abort": false, 00:17:59.472 "seek_hole": false, 00:17:59.472 "seek_data": false, 00:17:59.472 "copy": false, 00:17:59.472 "nvme_iov_md": false 00:17:59.472 }, 00:17:59.472 "driver_specific": { 00:17:59.472 "raid": { 00:17:59.472 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:59.472 "strip_size_kb": 64, 00:17:59.472 "state": "online", 00:17:59.472 "raid_level": "raid5f", 00:17:59.472 "superblock": true, 00:17:59.472 "num_base_bdevs": 4, 00:17:59.472 "num_base_bdevs_discovered": 4, 
00:17:59.472 "num_base_bdevs_operational": 4, 00:17:59.472 "base_bdevs_list": [ 00:17:59.472 { 00:17:59.472 "name": "pt1", 00:17:59.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.472 "is_configured": true, 00:17:59.472 "data_offset": 2048, 00:17:59.472 "data_size": 63488 00:17:59.472 }, 00:17:59.472 { 00:17:59.472 "name": "pt2", 00:17:59.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.472 "is_configured": true, 00:17:59.472 "data_offset": 2048, 00:17:59.472 "data_size": 63488 00:17:59.472 }, 00:17:59.472 { 00:17:59.472 "name": "pt3", 00:17:59.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.472 "is_configured": true, 00:17:59.472 "data_offset": 2048, 00:17:59.472 "data_size": 63488 00:17:59.472 }, 00:17:59.472 { 00:17:59.472 "name": "pt4", 00:17:59.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.472 "is_configured": true, 00:17:59.472 "data_offset": 2048, 00:17:59.472 "data_size": 63488 00:17:59.472 } 00:17:59.472 ] 00:17:59.472 } 00:17:59.472 } 00:17:59.472 }' 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.472 pt2 00:17:59.472 pt3 00:17:59.472 pt4' 00:17:59.472 03:29:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.472 03:29:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.472 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.732 
03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:59.732 [2024-11-05 03:29:13.224941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ddd88b02-9397-4155-9d4f-a5b757efdd96 '!=' ddd88b02-9397-4155-9d4f-a5b757efdd96 ']' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.732 [2024-11-05 03:29:13.276884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.732 "name": "raid_bdev1", 00:17:59.732 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:17:59.732 "strip_size_kb": 64, 00:17:59.732 "state": "online", 00:17:59.732 "raid_level": "raid5f", 00:17:59.732 "superblock": true, 00:17:59.732 "num_base_bdevs": 4, 00:17:59.732 "num_base_bdevs_discovered": 3, 00:17:59.732 "num_base_bdevs_operational": 3, 00:17:59.732 "base_bdevs_list": [ 00:17:59.732 { 00:17:59.732 "name": null, 00:17:59.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.732 "is_configured": false, 00:17:59.732 "data_offset": 0, 00:17:59.732 "data_size": 63488 00:17:59.732 }, 00:17:59.732 { 00:17:59.732 "name": "pt2", 00:17:59.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.732 "is_configured": true, 00:17:59.732 "data_offset": 2048, 00:17:59.732 "data_size": 63488 00:17:59.732 }, 00:17:59.732 { 00:17:59.732 "name": "pt3", 00:17:59.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.732 "is_configured": true, 00:17:59.732 "data_offset": 2048, 00:17:59.732 "data_size": 63488 00:17:59.732 }, 00:17:59.732 { 00:17:59.732 "name": "pt4", 00:17:59.732 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.732 "is_configured": true, 00:17:59.732 
"data_offset": 2048, 00:17:59.732 "data_size": 63488 00:17:59.732 } 00:17:59.732 ] 00:17:59.732 }' 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.732 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 [2024-11-05 03:29:13.805047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.301 [2024-11-05 03:29:13.805086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.301 [2024-11-05 03:29:13.805178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.301 [2024-11-05 03:29:13.805289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.301 [2024-11-05 03:29:13.805307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.301 [2024-11-05 03:29:13.885039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.301 [2024-11-05 03:29:13.885275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.301 [2024-11-05 03:29:13.885340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:00.301 [2024-11-05 03:29:13.885359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.301 [2024-11-05 03:29:13.888293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.301 [2024-11-05 03:29:13.888477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.301 [2024-11-05 03:29:13.888596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.301 [2024-11-05 03:29:13.888657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.301 pt2 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.301 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.302 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.302 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.302 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.302 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.302 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.561 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.561 "name": "raid_bdev1", 00:18:00.561 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:18:00.561 "strip_size_kb": 64, 00:18:00.561 "state": "configuring", 00:18:00.561 "raid_level": "raid5f", 00:18:00.561 "superblock": true, 00:18:00.561 
"num_base_bdevs": 4, 00:18:00.561 "num_base_bdevs_discovered": 1, 00:18:00.561 "num_base_bdevs_operational": 3, 00:18:00.561 "base_bdevs_list": [ 00:18:00.561 { 00:18:00.561 "name": null, 00:18:00.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.561 "is_configured": false, 00:18:00.561 "data_offset": 2048, 00:18:00.561 "data_size": 63488 00:18:00.561 }, 00:18:00.561 { 00:18:00.561 "name": "pt2", 00:18:00.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.561 "is_configured": true, 00:18:00.561 "data_offset": 2048, 00:18:00.561 "data_size": 63488 00:18:00.561 }, 00:18:00.561 { 00:18:00.561 "name": null, 00:18:00.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.561 "is_configured": false, 00:18:00.561 "data_offset": 2048, 00:18:00.561 "data_size": 63488 00:18:00.561 }, 00:18:00.561 { 00:18:00.561 "name": null, 00:18:00.561 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.561 "is_configured": false, 00:18:00.561 "data_offset": 2048, 00:18:00.561 "data_size": 63488 00:18:00.561 } 00:18:00.561 ] 00:18:00.561 }' 00:18:00.561 03:29:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.561 03:29:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 [2024-11-05 03:29:14.397284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:00.820 [2024-11-05 
03:29:14.397390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.820 [2024-11-05 03:29:14.397426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:00.820 [2024-11-05 03:29:14.397442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.820 [2024-11-05 03:29:14.398010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.820 [2024-11-05 03:29:14.398045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.820 [2024-11-05 03:29:14.398163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:00.820 [2024-11-05 03:29:14.398203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.820 pt3 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.820 "name": "raid_bdev1", 00:18:00.820 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:18:00.820 "strip_size_kb": 64, 00:18:00.820 "state": "configuring", 00:18:00.820 "raid_level": "raid5f", 00:18:00.820 "superblock": true, 00:18:00.820 "num_base_bdevs": 4, 00:18:00.820 "num_base_bdevs_discovered": 2, 00:18:00.820 "num_base_bdevs_operational": 3, 00:18:00.820 "base_bdevs_list": [ 00:18:00.820 { 00:18:00.820 "name": null, 00:18:00.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.820 "is_configured": false, 00:18:00.820 "data_offset": 2048, 00:18:00.820 "data_size": 63488 00:18:00.820 }, 00:18:00.820 { 00:18:00.820 "name": "pt2", 00:18:00.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.820 "is_configured": true, 00:18:00.820 "data_offset": 2048, 00:18:00.820 "data_size": 63488 00:18:00.820 }, 00:18:00.820 { 00:18:00.820 "name": "pt3", 00:18:00.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.820 "is_configured": true, 00:18:00.820 "data_offset": 2048, 00:18:00.820 "data_size": 63488 00:18:00.820 }, 00:18:00.820 { 00:18:00.820 "name": null, 00:18:00.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.820 "is_configured": false, 00:18:00.820 "data_offset": 2048, 
00:18:00.820 "data_size": 63488 00:18:00.820 } 00:18:00.820 ] 00:18:00.820 }' 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.820 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.387 [2024-11-05 03:29:14.917403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.387 [2024-11-05 03:29:14.917480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.387 [2024-11-05 03:29:14.917514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:01.387 [2024-11-05 03:29:14.917536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.387 [2024-11-05 03:29:14.918111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.387 [2024-11-05 03:29:14.918137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.387 [2024-11-05 03:29:14.918240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:01.387 [2024-11-05 03:29:14.918272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.387 [2024-11-05 03:29:14.918467] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:01.387 [2024-11-05 03:29:14.918484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.387 [2024-11-05 03:29:14.918790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:01.387 [2024-11-05 03:29:14.925270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:01.387 [2024-11-05 03:29:14.925467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:01.387 [2024-11-05 03:29:14.925847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.387 pt4 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.387 
03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.387 "name": "raid_bdev1", 00:18:01.387 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:18:01.387 "strip_size_kb": 64, 00:18:01.387 "state": "online", 00:18:01.387 "raid_level": "raid5f", 00:18:01.387 "superblock": true, 00:18:01.387 "num_base_bdevs": 4, 00:18:01.387 "num_base_bdevs_discovered": 3, 00:18:01.387 "num_base_bdevs_operational": 3, 00:18:01.387 "base_bdevs_list": [ 00:18:01.387 { 00:18:01.387 "name": null, 00:18:01.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.387 "is_configured": false, 00:18:01.387 "data_offset": 2048, 00:18:01.387 "data_size": 63488 00:18:01.387 }, 00:18:01.387 { 00:18:01.387 "name": "pt2", 00:18:01.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.387 "is_configured": true, 00:18:01.387 "data_offset": 2048, 00:18:01.387 "data_size": 63488 00:18:01.387 }, 00:18:01.387 { 00:18:01.387 "name": "pt3", 00:18:01.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.387 "is_configured": true, 00:18:01.387 "data_offset": 2048, 00:18:01.387 "data_size": 63488 00:18:01.387 }, 00:18:01.387 { 00:18:01.387 "name": "pt4", 00:18:01.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.387 "is_configured": true, 00:18:01.387 "data_offset": 2048, 00:18:01.387 "data_size": 63488 00:18:01.387 } 00:18:01.387 ] 00:18:01.387 }' 00:18:01.387 03:29:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.387 03:29:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 [2024-11-05 03:29:15.409226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.956 [2024-11-05 03:29:15.409435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.956 [2024-11-05 03:29:15.409554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.956 [2024-11-05 03:29:15.409651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.956 [2024-11-05 03:29:15.409673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 [2024-11-05 03:29:15.481227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.956 [2024-11-05 03:29:15.481467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.956 [2024-11-05 03:29:15.481514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:01.956 [2024-11-05 03:29:15.481534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.956 [2024-11-05 03:29:15.484462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.956 [2024-11-05 03:29:15.484646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.956 [2024-11-05 03:29:15.484765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:01.956 [2024-11-05 03:29:15.484839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.956 
[2024-11-05 03:29:15.485009] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:01.956 [2024-11-05 03:29:15.485034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.956 [2024-11-05 03:29:15.485055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:01.956 [2024-11-05 03:29:15.485129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.956 [2024-11-05 03:29:15.485279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.956 pt1 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.956 "name": "raid_bdev1", 00:18:01.956 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:18:01.956 "strip_size_kb": 64, 00:18:01.956 "state": "configuring", 00:18:01.956 "raid_level": "raid5f", 00:18:01.956 "superblock": true, 00:18:01.956 "num_base_bdevs": 4, 00:18:01.956 "num_base_bdevs_discovered": 2, 00:18:01.956 "num_base_bdevs_operational": 3, 00:18:01.956 "base_bdevs_list": [ 00:18:01.956 { 00:18:01.956 "name": null, 00:18:01.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.956 "is_configured": false, 00:18:01.956 "data_offset": 2048, 00:18:01.956 "data_size": 63488 00:18:01.956 }, 00:18:01.956 { 00:18:01.956 "name": "pt2", 00:18:01.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.956 "is_configured": true, 00:18:01.956 "data_offset": 2048, 00:18:01.956 "data_size": 63488 00:18:01.956 }, 00:18:01.956 { 00:18:01.956 "name": "pt3", 00:18:01.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.956 "is_configured": true, 00:18:01.956 "data_offset": 2048, 00:18:01.956 "data_size": 63488 00:18:01.956 }, 00:18:01.956 { 00:18:01.956 "name": null, 00:18:01.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.956 "is_configured": false, 00:18:01.956 "data_offset": 2048, 00:18:01.956 "data_size": 63488 00:18:01.956 } 00:18:01.956 ] 
00:18:01.956 }' 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.956 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:02.525 03:29:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:02.525 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.525 03:29:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 [2024-11-05 03:29:16.037534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:02.525 [2024-11-05 03:29:16.037609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.525 [2024-11-05 03:29:16.037646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:02.525 [2024-11-05 03:29:16.037662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.525 [2024-11-05 03:29:16.038240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.525 [2024-11-05 03:29:16.038273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:02.525 [2024-11-05 03:29:16.038403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:02.525 [2024-11-05 03:29:16.038445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:02.525 [2024-11-05 03:29:16.038622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:02.525 [2024-11-05 03:29:16.038639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:02.525 [2024-11-05 03:29:16.038952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:02.525 [2024-11-05 03:29:16.045396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:02.525 pt4 00:18:02.525 [2024-11-05 03:29:16.045572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:02.525 [2024-11-05 03:29:16.045935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.525 03:29:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.525 "name": "raid_bdev1", 00:18:02.525 "uuid": "ddd88b02-9397-4155-9d4f-a5b757efdd96", 00:18:02.525 "strip_size_kb": 64, 00:18:02.525 "state": "online", 00:18:02.525 "raid_level": "raid5f", 00:18:02.525 "superblock": true, 00:18:02.525 "num_base_bdevs": 4, 00:18:02.525 "num_base_bdevs_discovered": 3, 00:18:02.525 "num_base_bdevs_operational": 3, 00:18:02.525 "base_bdevs_list": [ 00:18:02.525 { 00:18:02.525 "name": null, 00:18:02.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.525 "is_configured": false, 00:18:02.525 "data_offset": 2048, 00:18:02.525 "data_size": 63488 00:18:02.525 }, 00:18:02.525 { 00:18:02.525 "name": "pt2", 00:18:02.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.525 "is_configured": true, 00:18:02.525 "data_offset": 2048, 00:18:02.525 "data_size": 63488 00:18:02.525 }, 00:18:02.525 { 00:18:02.525 "name": "pt3", 00:18:02.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.525 "is_configured": true, 00:18:02.525 "data_offset": 2048, 00:18:02.525 "data_size": 63488 
00:18:02.525 }, 00:18:02.525 { 00:18:02.525 "name": "pt4", 00:18:02.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.525 "is_configured": true, 00:18:02.525 "data_offset": 2048, 00:18:02.525 "data_size": 63488 00:18:02.525 } 00:18:02.525 ] 00:18:02.525 }' 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.525 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.093 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.094 [2024-11-05 03:29:16.609653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ddd88b02-9397-4155-9d4f-a5b757efdd96 '!=' ddd88b02-9397-4155-9d4f-a5b757efdd96 ']' 00:18:03.094 03:29:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84283 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84283 ']' 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84283 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84283 00:18:03.094 killing process with pid 84283 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84283' 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84283 00:18:03.094 [2024-11-05 03:29:16.685323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.094 03:29:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84283 00:18:03.094 [2024-11-05 03:29:16.685456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.094 [2024-11-05 03:29:16.685555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.094 [2024-11-05 03:29:16.685589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:03.662 [2024-11-05 03:29:17.038621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.600 ************************************ 00:18:04.600 END TEST raid5f_superblock_test 00:18:04.600 
************************************ 00:18:04.600 03:29:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:04.600 00:18:04.600 real 0m9.496s 00:18:04.600 user 0m15.611s 00:18:04.600 sys 0m1.381s 00:18:04.600 03:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:04.600 03:29:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.600 03:29:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:04.600 03:29:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:04.600 03:29:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:04.600 03:29:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:04.600 03:29:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.600 ************************************ 00:18:04.600 START TEST raid5f_rebuild_test 00:18:04.600 ************************************ 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:04.600 03:29:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84774 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84774 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84774 ']' 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:04.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:04.600 03:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.600 [2024-11-05 03:29:18.226618] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:18:04.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:04.601 Zero copy mechanism will not be used. 
00:18:04.601 [2024-11-05 03:29:18.227238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84774 ] 00:18:04.859 [2024-11-05 03:29:18.408983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.118 [2024-11-05 03:29:18.538539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.118 [2024-11-05 03:29:18.742218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.118 [2024-11-05 03:29:18.742256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.727 BaseBdev1_malloc 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.727 [2024-11-05 03:29:19.251930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:05.727 [2024-11-05 03:29:19.252015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.727 [2024-11-05 03:29:19.252050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:05.727 [2024-11-05 03:29:19.252071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.727 [2024-11-05 03:29:19.254839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.727 [2024-11-05 03:29:19.254893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.727 BaseBdev1 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.727 BaseBdev2_malloc 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.727 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.727 [2024-11-05 03:29:19.300918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:05.727 [2024-11-05 03:29:19.301134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.727 [2024-11-05 03:29:19.301176] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:05.727 [2024-11-05 03:29:19.301207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.728 [2024-11-05 03:29:19.303943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.728 [2024-11-05 03:29:19.303995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:05.728 BaseBdev2 00:18:05.728 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.728 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.728 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:05.728 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.728 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 BaseBdev3_malloc 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 [2024-11-05 03:29:19.359460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:06.007 [2024-11-05 03:29:19.359532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.007 [2024-11-05 03:29:19.359572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:06.007 [2024-11-05 03:29:19.359593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.007 
[2024-11-05 03:29:19.362292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.007 [2024-11-05 03:29:19.362362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:06.007 BaseBdev3 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 BaseBdev4_malloc 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 [2024-11-05 03:29:19.407588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:06.007 [2024-11-05 03:29:19.407813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.007 [2024-11-05 03:29:19.407854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:06.007 [2024-11-05 03:29:19.407875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.007 [2024-11-05 03:29:19.410588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.007 [2024-11-05 03:29:19.410645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:18:06.007 BaseBdev4 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 spare_malloc 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 spare_delay 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 [2024-11-05 03:29:19.463915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.007 [2024-11-05 03:29:19.463993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.007 [2024-11-05 03:29:19.464025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:06.007 [2024-11-05 03:29:19.464044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.007 [2024-11-05 03:29:19.466839] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.007 [2024-11-05 03:29:19.466894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.007 spare 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.007 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 [2024-11-05 03:29:19.471970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.007 [2024-11-05 03:29:19.474515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.007 [2024-11-05 03:29:19.474607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.007 [2024-11-05 03:29:19.474691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:06.007 [2024-11-05 03:29:19.474821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.007 [2024-11-05 03:29:19.474844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:06.008 [2024-11-05 03:29:19.475158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:06.008 [2024-11-05 03:29:19.482059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.008 [2024-11-05 03:29:19.482201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.008 [2024-11-05 03:29:19.482617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.008 03:29:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.008 "name": "raid_bdev1", 00:18:06.008 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:06.008 "strip_size_kb": 64, 00:18:06.008 "state": "online", 00:18:06.008 
"raid_level": "raid5f", 00:18:06.008 "superblock": false, 00:18:06.008 "num_base_bdevs": 4, 00:18:06.008 "num_base_bdevs_discovered": 4, 00:18:06.008 "num_base_bdevs_operational": 4, 00:18:06.008 "base_bdevs_list": [ 00:18:06.008 { 00:18:06.008 "name": "BaseBdev1", 00:18:06.008 "uuid": "9d0e6763-969b-515b-87f4-5873c14bca2f", 00:18:06.008 "is_configured": true, 00:18:06.008 "data_offset": 0, 00:18:06.008 "data_size": 65536 00:18:06.008 }, 00:18:06.008 { 00:18:06.008 "name": "BaseBdev2", 00:18:06.008 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:06.008 "is_configured": true, 00:18:06.008 "data_offset": 0, 00:18:06.008 "data_size": 65536 00:18:06.008 }, 00:18:06.008 { 00:18:06.008 "name": "BaseBdev3", 00:18:06.008 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:06.008 "is_configured": true, 00:18:06.008 "data_offset": 0, 00:18:06.008 "data_size": 65536 00:18:06.008 }, 00:18:06.008 { 00:18:06.008 "name": "BaseBdev4", 00:18:06.008 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:06.008 "is_configured": true, 00:18:06.008 "data_offset": 0, 00:18:06.008 "data_size": 65536 00:18:06.008 } 00:18:06.008 ] 00:18:06.008 }' 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.008 03:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.575 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:06.575 03:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.575 [2024-11-05 03:29:20.010482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:18:06.575 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:06.834 [2024-11-05 03:29:20.410375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:06.834 /dev/nbd0 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.834 1+0 records in 00:18:06.834 1+0 records out 00:18:06.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343626 s, 11.9 MB/s 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:06.834 03:29:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:07.093 03:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:07.660 512+0 records in 00:18:07.660 512+0 records out 00:18:07.660 100663296 bytes (101 MB, 96 MiB) copied, 0.70062 s, 144 MB/s 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.660 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:07.919 
[2024-11-05 03:29:21.455945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.919 [2024-11-05 03:29:21.463584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.919 "name": "raid_bdev1", 00:18:07.919 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:07.919 "strip_size_kb": 64, 00:18:07.919 "state": "online", 00:18:07.919 "raid_level": "raid5f", 00:18:07.919 "superblock": false, 00:18:07.919 "num_base_bdevs": 4, 00:18:07.919 "num_base_bdevs_discovered": 3, 00:18:07.919 "num_base_bdevs_operational": 3, 00:18:07.919 "base_bdevs_list": [ 00:18:07.919 { 00:18:07.919 "name": null, 00:18:07.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.919 "is_configured": false, 00:18:07.919 "data_offset": 0, 00:18:07.919 "data_size": 65536 00:18:07.919 }, 00:18:07.919 { 00:18:07.919 "name": "BaseBdev2", 00:18:07.919 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:07.919 "is_configured": true, 00:18:07.919 "data_offset": 0, 00:18:07.919 "data_size": 65536 00:18:07.919 }, 00:18:07.919 { 00:18:07.919 "name": "BaseBdev3", 00:18:07.919 "uuid": 
"0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:07.919 "is_configured": true, 00:18:07.919 "data_offset": 0, 00:18:07.919 "data_size": 65536 00:18:07.919 }, 00:18:07.919 { 00:18:07.919 "name": "BaseBdev4", 00:18:07.919 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:07.919 "is_configured": true, 00:18:07.919 "data_offset": 0, 00:18:07.919 "data_size": 65536 00:18:07.919 } 00:18:07.919 ] 00:18:07.919 }' 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.919 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.487 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.487 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 [2024-11-05 03:29:21.975745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.487 [2024-11-05 03:29:21.990168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:08.487 03:29:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.487 03:29:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:08.487 [2024-11-05 03:29:21.999414] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.423 03:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.424 03:29:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.424 03:29:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.424 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.424 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.424 "name": "raid_bdev1", 00:18:09.424 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:09.424 "strip_size_kb": 64, 00:18:09.424 "state": "online", 00:18:09.424 "raid_level": "raid5f", 00:18:09.424 "superblock": false, 00:18:09.424 "num_base_bdevs": 4, 00:18:09.424 "num_base_bdevs_discovered": 4, 00:18:09.424 "num_base_bdevs_operational": 4, 00:18:09.424 "process": { 00:18:09.424 "type": "rebuild", 00:18:09.424 "target": "spare", 00:18:09.424 "progress": { 00:18:09.424 "blocks": 17280, 00:18:09.424 "percent": 8 00:18:09.424 } 00:18:09.424 }, 00:18:09.424 "base_bdevs_list": [ 00:18:09.424 { 00:18:09.424 "name": "spare", 00:18:09.424 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:09.424 "is_configured": true, 00:18:09.424 "data_offset": 0, 00:18:09.424 "data_size": 65536 00:18:09.424 }, 00:18:09.424 { 00:18:09.424 "name": "BaseBdev2", 00:18:09.424 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:09.424 "is_configured": true, 00:18:09.424 "data_offset": 0, 00:18:09.424 "data_size": 65536 00:18:09.424 }, 00:18:09.424 { 00:18:09.424 "name": "BaseBdev3", 00:18:09.424 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:09.424 "is_configured": true, 00:18:09.424 "data_offset": 0, 00:18:09.424 "data_size": 65536 00:18:09.424 }, 
00:18:09.424 { 00:18:09.424 "name": "BaseBdev4", 00:18:09.424 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:09.424 "is_configured": true, 00:18:09.424 "data_offset": 0, 00:18:09.424 "data_size": 65536 00:18:09.424 } 00:18:09.424 ] 00:18:09.424 }' 00:18:09.424 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.683 [2024-11-05 03:29:23.164669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.683 [2024-11-05 03:29:23.210664] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.683 [2024-11-05 03:29:23.210919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.683 [2024-11-05 03:29:23.210953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.683 [2024-11-05 03:29:23.210973] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.683 "name": "raid_bdev1", 00:18:09.683 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:09.683 "strip_size_kb": 64, 00:18:09.683 "state": "online", 00:18:09.683 "raid_level": "raid5f", 00:18:09.683 "superblock": false, 00:18:09.683 "num_base_bdevs": 4, 00:18:09.683 "num_base_bdevs_discovered": 3, 00:18:09.683 "num_base_bdevs_operational": 3, 00:18:09.683 "base_bdevs_list": [ 00:18:09.683 { 00:18:09.683 "name": null, 00:18:09.683 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:09.683 "is_configured": false, 00:18:09.683 "data_offset": 0, 00:18:09.683 "data_size": 65536 00:18:09.683 }, 00:18:09.683 { 00:18:09.683 "name": "BaseBdev2", 00:18:09.683 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:09.683 "is_configured": true, 00:18:09.683 "data_offset": 0, 00:18:09.683 "data_size": 65536 00:18:09.683 }, 00:18:09.683 { 00:18:09.683 "name": "BaseBdev3", 00:18:09.683 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:09.683 "is_configured": true, 00:18:09.683 "data_offset": 0, 00:18:09.683 "data_size": 65536 00:18:09.683 }, 00:18:09.683 { 00:18:09.683 "name": "BaseBdev4", 00:18:09.683 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:09.683 "is_configured": true, 00:18:09.683 "data_offset": 0, 00:18:09.683 "data_size": 65536 00:18:09.683 } 00:18:09.683 ] 00:18:09.683 }' 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.683 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.251 "name": "raid_bdev1", 00:18:10.251 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:10.251 "strip_size_kb": 64, 00:18:10.251 "state": "online", 00:18:10.251 "raid_level": "raid5f", 00:18:10.251 "superblock": false, 00:18:10.251 "num_base_bdevs": 4, 00:18:10.251 "num_base_bdevs_discovered": 3, 00:18:10.251 "num_base_bdevs_operational": 3, 00:18:10.251 "base_bdevs_list": [ 00:18:10.251 { 00:18:10.251 "name": null, 00:18:10.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.251 "is_configured": false, 00:18:10.251 "data_offset": 0, 00:18:10.251 "data_size": 65536 00:18:10.251 }, 00:18:10.251 { 00:18:10.251 "name": "BaseBdev2", 00:18:10.251 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:10.251 "is_configured": true, 00:18:10.251 "data_offset": 0, 00:18:10.251 "data_size": 65536 00:18:10.251 }, 00:18:10.251 { 00:18:10.251 "name": "BaseBdev3", 00:18:10.251 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:10.251 "is_configured": true, 00:18:10.251 "data_offset": 0, 00:18:10.251 "data_size": 65536 00:18:10.251 }, 00:18:10.251 { 00:18:10.251 "name": "BaseBdev4", 00:18:10.251 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:10.251 "is_configured": true, 00:18:10.251 "data_offset": 0, 00:18:10.251 "data_size": 65536 00:18:10.251 } 00:18:10.251 ] 00:18:10.251 }' 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.251 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.511 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:18:10.511 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.511 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.511 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.511 [2024-11-05 03:29:23.926323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.511 [2024-11-05 03:29:23.939985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:10.511 03:29:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.511 03:29:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:10.511 [2024-11-05 03:29:23.949259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.449 03:29:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.449 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.449 "name": "raid_bdev1", 00:18:11.449 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:11.449 "strip_size_kb": 64, 00:18:11.449 "state": "online", 00:18:11.449 "raid_level": "raid5f", 00:18:11.449 "superblock": false, 00:18:11.449 "num_base_bdevs": 4, 00:18:11.449 "num_base_bdevs_discovered": 4, 00:18:11.449 "num_base_bdevs_operational": 4, 00:18:11.449 "process": { 00:18:11.449 "type": "rebuild", 00:18:11.449 "target": "spare", 00:18:11.449 "progress": { 00:18:11.449 "blocks": 17280, 00:18:11.449 "percent": 8 00:18:11.449 } 00:18:11.449 }, 00:18:11.449 "base_bdevs_list": [ 00:18:11.449 { 00:18:11.449 "name": "spare", 00:18:11.449 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:11.449 "is_configured": true, 00:18:11.449 "data_offset": 0, 00:18:11.449 "data_size": 65536 00:18:11.449 }, 00:18:11.449 { 00:18:11.449 "name": "BaseBdev2", 00:18:11.449 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:11.449 "is_configured": true, 00:18:11.449 "data_offset": 0, 00:18:11.449 "data_size": 65536 00:18:11.449 }, 00:18:11.449 { 00:18:11.449 "name": "BaseBdev3", 00:18:11.449 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:11.449 "is_configured": true, 00:18:11.449 "data_offset": 0, 00:18:11.449 "data_size": 65536 00:18:11.449 }, 00:18:11.449 { 00:18:11.449 "name": "BaseBdev4", 00:18:11.449 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:11.449 "is_configured": true, 00:18:11.449 "data_offset": 0, 00:18:11.449 "data_size": 65536 00:18:11.449 } 00:18:11.449 ] 00:18:11.449 }' 00:18:11.449 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.449 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.449 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=667 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.708 "name": "raid_bdev1", 00:18:11.708 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:11.708 "strip_size_kb": 64, 
00:18:11.708 "state": "online", 00:18:11.708 "raid_level": "raid5f", 00:18:11.708 "superblock": false, 00:18:11.708 "num_base_bdevs": 4, 00:18:11.708 "num_base_bdevs_discovered": 4, 00:18:11.708 "num_base_bdevs_operational": 4, 00:18:11.708 "process": { 00:18:11.708 "type": "rebuild", 00:18:11.708 "target": "spare", 00:18:11.708 "progress": { 00:18:11.708 "blocks": 21120, 00:18:11.708 "percent": 10 00:18:11.708 } 00:18:11.708 }, 00:18:11.708 "base_bdevs_list": [ 00:18:11.708 { 00:18:11.708 "name": "spare", 00:18:11.708 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:11.708 "is_configured": true, 00:18:11.708 "data_offset": 0, 00:18:11.708 "data_size": 65536 00:18:11.708 }, 00:18:11.708 { 00:18:11.708 "name": "BaseBdev2", 00:18:11.708 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:11.708 "is_configured": true, 00:18:11.708 "data_offset": 0, 00:18:11.708 "data_size": 65536 00:18:11.708 }, 00:18:11.708 { 00:18:11.708 "name": "BaseBdev3", 00:18:11.708 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:11.708 "is_configured": true, 00:18:11.708 "data_offset": 0, 00:18:11.708 "data_size": 65536 00:18:11.708 }, 00:18:11.708 { 00:18:11.708 "name": "BaseBdev4", 00:18:11.708 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:11.708 "is_configured": true, 00:18:11.708 "data_offset": 0, 00:18:11.708 "data_size": 65536 00:18:11.708 } 00:18:11.708 ] 00:18:11.708 }' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.708 03:29:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.086 "name": "raid_bdev1", 00:18:13.086 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:13.086 "strip_size_kb": 64, 00:18:13.086 "state": "online", 00:18:13.086 "raid_level": "raid5f", 00:18:13.086 "superblock": false, 00:18:13.086 "num_base_bdevs": 4, 00:18:13.086 "num_base_bdevs_discovered": 4, 00:18:13.086 "num_base_bdevs_operational": 4, 00:18:13.086 "process": { 00:18:13.086 "type": "rebuild", 00:18:13.086 "target": "spare", 00:18:13.086 "progress": { 00:18:13.086 "blocks": 44160, 00:18:13.086 "percent": 22 00:18:13.086 } 00:18:13.086 }, 00:18:13.086 "base_bdevs_list": [ 00:18:13.086 { 00:18:13.086 "name": "spare", 00:18:13.086 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:13.086 "is_configured": true, 
00:18:13.086 "data_offset": 0, 00:18:13.086 "data_size": 65536 00:18:13.086 }, 00:18:13.086 { 00:18:13.086 "name": "BaseBdev2", 00:18:13.086 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:13.086 "is_configured": true, 00:18:13.086 "data_offset": 0, 00:18:13.086 "data_size": 65536 00:18:13.086 }, 00:18:13.086 { 00:18:13.086 "name": "BaseBdev3", 00:18:13.086 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:13.086 "is_configured": true, 00:18:13.086 "data_offset": 0, 00:18:13.086 "data_size": 65536 00:18:13.086 }, 00:18:13.086 { 00:18:13.086 "name": "BaseBdev4", 00:18:13.086 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:13.086 "is_configured": true, 00:18:13.086 "data_offset": 0, 00:18:13.086 "data_size": 65536 00:18:13.086 } 00:18:13.086 ] 00:18:13.086 }' 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.086 03:29:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.022 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.022 "name": "raid_bdev1", 00:18:14.022 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:14.022 "strip_size_kb": 64, 00:18:14.022 "state": "online", 00:18:14.022 "raid_level": "raid5f", 00:18:14.022 "superblock": false, 00:18:14.022 "num_base_bdevs": 4, 00:18:14.022 "num_base_bdevs_discovered": 4, 00:18:14.022 "num_base_bdevs_operational": 4, 00:18:14.022 "process": { 00:18:14.022 "type": "rebuild", 00:18:14.022 "target": "spare", 00:18:14.022 "progress": { 00:18:14.022 "blocks": 65280, 00:18:14.022 "percent": 33 00:18:14.022 } 00:18:14.022 }, 00:18:14.022 "base_bdevs_list": [ 00:18:14.022 { 00:18:14.022 "name": "spare", 00:18:14.022 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:14.022 "is_configured": true, 00:18:14.022 "data_offset": 0, 00:18:14.022 "data_size": 65536 00:18:14.022 }, 00:18:14.022 { 00:18:14.022 "name": "BaseBdev2", 00:18:14.022 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:14.022 "is_configured": true, 00:18:14.022 "data_offset": 0, 00:18:14.022 "data_size": 65536 00:18:14.022 }, 00:18:14.022 { 00:18:14.023 "name": "BaseBdev3", 00:18:14.023 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:14.023 "is_configured": true, 00:18:14.023 "data_offset": 0, 00:18:14.023 "data_size": 65536 00:18:14.023 }, 00:18:14.023 { 00:18:14.023 "name": "BaseBdev4", 00:18:14.023 "uuid": 
"65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:14.023 "is_configured": true, 00:18:14.023 "data_offset": 0, 00:18:14.023 "data_size": 65536 00:18:14.023 } 00:18:14.023 ] 00:18:14.023 }' 00:18:14.023 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.023 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.023 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.023 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.023 03:29:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.401 "name": "raid_bdev1", 00:18:15.401 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:15.401 "strip_size_kb": 64, 00:18:15.401 "state": "online", 00:18:15.401 "raid_level": "raid5f", 00:18:15.401 "superblock": false, 00:18:15.401 "num_base_bdevs": 4, 00:18:15.401 "num_base_bdevs_discovered": 4, 00:18:15.401 "num_base_bdevs_operational": 4, 00:18:15.401 "process": { 00:18:15.401 "type": "rebuild", 00:18:15.401 "target": "spare", 00:18:15.401 "progress": { 00:18:15.401 "blocks": 88320, 00:18:15.401 "percent": 44 00:18:15.401 } 00:18:15.401 }, 00:18:15.401 "base_bdevs_list": [ 00:18:15.401 { 00:18:15.401 "name": "spare", 00:18:15.401 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:15.401 "is_configured": true, 00:18:15.401 "data_offset": 0, 00:18:15.401 "data_size": 65536 00:18:15.401 }, 00:18:15.401 { 00:18:15.401 "name": "BaseBdev2", 00:18:15.401 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:15.401 "is_configured": true, 00:18:15.401 "data_offset": 0, 00:18:15.401 "data_size": 65536 00:18:15.401 }, 00:18:15.401 { 00:18:15.401 "name": "BaseBdev3", 00:18:15.401 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:15.401 "is_configured": true, 00:18:15.401 "data_offset": 0, 00:18:15.401 "data_size": 65536 00:18:15.401 }, 00:18:15.401 { 00:18:15.401 "name": "BaseBdev4", 00:18:15.401 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:15.401 "is_configured": true, 00:18:15.401 "data_offset": 0, 00:18:15.401 "data_size": 65536 00:18:15.401 } 00:18:15.401 ] 00:18:15.401 }' 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:18:15.401 03:29:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.340 "name": "raid_bdev1", 00:18:16.340 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:16.340 "strip_size_kb": 64, 00:18:16.340 "state": "online", 00:18:16.340 "raid_level": "raid5f", 00:18:16.340 "superblock": false, 00:18:16.340 "num_base_bdevs": 4, 00:18:16.340 "num_base_bdevs_discovered": 4, 00:18:16.340 "num_base_bdevs_operational": 4, 00:18:16.340 "process": { 00:18:16.340 "type": "rebuild", 00:18:16.340 "target": "spare", 00:18:16.340 "progress": { 00:18:16.340 "blocks": 109440, 00:18:16.340 "percent": 55 00:18:16.340 } 00:18:16.340 }, 00:18:16.340 
"base_bdevs_list": [ 00:18:16.340 { 00:18:16.340 "name": "spare", 00:18:16.340 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:16.340 "is_configured": true, 00:18:16.340 "data_offset": 0, 00:18:16.340 "data_size": 65536 00:18:16.340 }, 00:18:16.340 { 00:18:16.340 "name": "BaseBdev2", 00:18:16.340 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:16.340 "is_configured": true, 00:18:16.340 "data_offset": 0, 00:18:16.340 "data_size": 65536 00:18:16.340 }, 00:18:16.340 { 00:18:16.340 "name": "BaseBdev3", 00:18:16.340 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:16.340 "is_configured": true, 00:18:16.340 "data_offset": 0, 00:18:16.340 "data_size": 65536 00:18:16.340 }, 00:18:16.340 { 00:18:16.340 "name": "BaseBdev4", 00:18:16.340 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:16.340 "is_configured": true, 00:18:16.340 "data_offset": 0, 00:18:16.340 "data_size": 65536 00:18:16.340 } 00:18:16.340 ] 00:18:16.340 }' 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.340 03:29:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.718 03:29:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.718 03:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.718 03:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.718 "name": "raid_bdev1", 00:18:17.718 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:17.718 "strip_size_kb": 64, 00:18:17.718 "state": "online", 00:18:17.718 "raid_level": "raid5f", 00:18:17.718 "superblock": false, 00:18:17.718 "num_base_bdevs": 4, 00:18:17.718 "num_base_bdevs_discovered": 4, 00:18:17.718 "num_base_bdevs_operational": 4, 00:18:17.718 "process": { 00:18:17.718 "type": "rebuild", 00:18:17.718 "target": "spare", 00:18:17.718 "progress": { 00:18:17.718 "blocks": 132480, 00:18:17.718 "percent": 67 00:18:17.718 } 00:18:17.718 }, 00:18:17.718 "base_bdevs_list": [ 00:18:17.718 { 00:18:17.718 "name": "spare", 00:18:17.718 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 0, 00:18:17.718 "data_size": 65536 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev2", 00:18:17.718 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 0, 00:18:17.718 "data_size": 65536 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev3", 00:18:17.718 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:17.718 
"is_configured": true, 00:18:17.718 "data_offset": 0, 00:18:17.718 "data_size": 65536 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev4", 00:18:17.718 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 0, 00:18:17.718 "data_size": 65536 00:18:17.718 } 00:18:17.718 ] 00:18:17.718 }' 00:18:17.718 03:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.718 03:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.718 03:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.718 03:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.718 03:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.655 "name": "raid_bdev1", 00:18:18.655 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:18.655 "strip_size_kb": 64, 00:18:18.655 "state": "online", 00:18:18.655 "raid_level": "raid5f", 00:18:18.655 "superblock": false, 00:18:18.655 "num_base_bdevs": 4, 00:18:18.655 "num_base_bdevs_discovered": 4, 00:18:18.655 "num_base_bdevs_operational": 4, 00:18:18.655 "process": { 00:18:18.655 "type": "rebuild", 00:18:18.655 "target": "spare", 00:18:18.655 "progress": { 00:18:18.655 "blocks": 155520, 00:18:18.655 "percent": 79 00:18:18.655 } 00:18:18.655 }, 00:18:18.655 "base_bdevs_list": [ 00:18:18.655 { 00:18:18.655 "name": "spare", 00:18:18.655 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:18.655 "is_configured": true, 00:18:18.655 "data_offset": 0, 00:18:18.655 "data_size": 65536 00:18:18.655 }, 00:18:18.655 { 00:18:18.655 "name": "BaseBdev2", 00:18:18.655 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:18.655 "is_configured": true, 00:18:18.655 "data_offset": 0, 00:18:18.655 "data_size": 65536 00:18:18.655 }, 00:18:18.655 { 00:18:18.655 "name": "BaseBdev3", 00:18:18.655 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:18.655 "is_configured": true, 00:18:18.655 "data_offset": 0, 00:18:18.655 "data_size": 65536 00:18:18.655 }, 00:18:18.655 { 00:18:18.655 "name": "BaseBdev4", 00:18:18.655 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:18.655 "is_configured": true, 00:18:18.655 "data_offset": 0, 00:18:18.655 "data_size": 65536 00:18:18.655 } 00:18:18.655 ] 00:18:18.655 }' 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.655 03:29:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.655 03:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.032 "name": "raid_bdev1", 00:18:20.032 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:20.032 "strip_size_kb": 64, 00:18:20.032 "state": "online", 00:18:20.032 "raid_level": "raid5f", 00:18:20.032 "superblock": false, 00:18:20.032 "num_base_bdevs": 4, 00:18:20.032 "num_base_bdevs_discovered": 4, 00:18:20.032 "num_base_bdevs_operational": 4, 00:18:20.032 "process": { 00:18:20.032 
"type": "rebuild", 00:18:20.032 "target": "spare", 00:18:20.032 "progress": { 00:18:20.032 "blocks": 176640, 00:18:20.032 "percent": 89 00:18:20.032 } 00:18:20.032 }, 00:18:20.032 "base_bdevs_list": [ 00:18:20.032 { 00:18:20.032 "name": "spare", 00:18:20.032 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:20.032 "is_configured": true, 00:18:20.032 "data_offset": 0, 00:18:20.032 "data_size": 65536 00:18:20.032 }, 00:18:20.032 { 00:18:20.032 "name": "BaseBdev2", 00:18:20.032 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:20.032 "is_configured": true, 00:18:20.032 "data_offset": 0, 00:18:20.032 "data_size": 65536 00:18:20.032 }, 00:18:20.032 { 00:18:20.032 "name": "BaseBdev3", 00:18:20.032 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:20.032 "is_configured": true, 00:18:20.032 "data_offset": 0, 00:18:20.032 "data_size": 65536 00:18:20.032 }, 00:18:20.032 { 00:18:20.032 "name": "BaseBdev4", 00:18:20.032 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:20.032 "is_configured": true, 00:18:20.032 "data_offset": 0, 00:18:20.032 "data_size": 65536 00:18:20.032 } 00:18:20.032 ] 00:18:20.032 }' 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.032 03:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.005 [2024-11-05 03:29:34.343486] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:21.005 [2024-11-05 03:29:34.343589] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:21.005 [2024-11-05 03:29:34.343659] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.005 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.005 "name": "raid_bdev1", 00:18:21.005 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:21.005 "strip_size_kb": 64, 00:18:21.005 "state": "online", 00:18:21.005 "raid_level": "raid5f", 00:18:21.005 "superblock": false, 00:18:21.005 "num_base_bdevs": 4, 00:18:21.005 "num_base_bdevs_discovered": 4, 00:18:21.005 "num_base_bdevs_operational": 4, 00:18:21.006 "base_bdevs_list": [ 00:18:21.006 { 00:18:21.006 "name": "spare", 00:18:21.006 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 }, 00:18:21.006 { 
00:18:21.006 "name": "BaseBdev2", 00:18:21.006 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 }, 00:18:21.006 { 00:18:21.006 "name": "BaseBdev3", 00:18:21.006 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 }, 00:18:21.006 { 00:18:21.006 "name": "BaseBdev4", 00:18:21.006 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 } 00:18:21.006 ] 00:18:21.006 }' 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.006 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.006 "name": "raid_bdev1", 00:18:21.006 "uuid": "01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:21.006 "strip_size_kb": 64, 00:18:21.006 "state": "online", 00:18:21.006 "raid_level": "raid5f", 00:18:21.006 "superblock": false, 00:18:21.006 "num_base_bdevs": 4, 00:18:21.006 "num_base_bdevs_discovered": 4, 00:18:21.006 "num_base_bdevs_operational": 4, 00:18:21.006 "base_bdevs_list": [ 00:18:21.006 { 00:18:21.006 "name": "spare", 00:18:21.006 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 }, 00:18:21.006 { 00:18:21.006 "name": "BaseBdev2", 00:18:21.006 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 }, 00:18:21.006 { 00:18:21.006 "name": "BaseBdev3", 00:18:21.006 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 }, 00:18:21.006 { 00:18:21.006 "name": "BaseBdev4", 00:18:21.006 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:21.006 "is_configured": true, 00:18:21.006 "data_offset": 0, 00:18:21.006 "data_size": 65536 00:18:21.006 } 00:18:21.006 ] 00:18:21.006 }' 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.265 03:29:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.265 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.266 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.266 "name": "raid_bdev1", 00:18:21.266 "uuid": 
"01c3c41f-c1d0-447f-85a1-a84a29d37785", 00:18:21.266 "strip_size_kb": 64, 00:18:21.266 "state": "online", 00:18:21.266 "raid_level": "raid5f", 00:18:21.266 "superblock": false, 00:18:21.266 "num_base_bdevs": 4, 00:18:21.266 "num_base_bdevs_discovered": 4, 00:18:21.266 "num_base_bdevs_operational": 4, 00:18:21.266 "base_bdevs_list": [ 00:18:21.266 { 00:18:21.266 "name": "spare", 00:18:21.266 "uuid": "afb07c0c-0cef-5190-abf6-25bce126b29a", 00:18:21.266 "is_configured": true, 00:18:21.266 "data_offset": 0, 00:18:21.266 "data_size": 65536 00:18:21.266 }, 00:18:21.266 { 00:18:21.266 "name": "BaseBdev2", 00:18:21.266 "uuid": "a7dc19ed-c14d-539d-9aab-69a244bfdcf7", 00:18:21.266 "is_configured": true, 00:18:21.266 "data_offset": 0, 00:18:21.266 "data_size": 65536 00:18:21.266 }, 00:18:21.266 { 00:18:21.266 "name": "BaseBdev3", 00:18:21.266 "uuid": "0231eb6d-8039-5de7-9e5d-4463e0cf5db0", 00:18:21.266 "is_configured": true, 00:18:21.266 "data_offset": 0, 00:18:21.266 "data_size": 65536 00:18:21.266 }, 00:18:21.266 { 00:18:21.266 "name": "BaseBdev4", 00:18:21.266 "uuid": "65ea3c5e-447e-5046-b0ca-e4fe80439438", 00:18:21.266 "is_configured": true, 00:18:21.266 "data_offset": 0, 00:18:21.266 "data_size": 65536 00:18:21.266 } 00:18:21.266 ] 00:18:21.266 }' 00:18:21.266 03:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.266 03:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.833 [2024-11-05 03:29:35.246248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.833 [2024-11-05 03:29:35.246318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:18:21.833 [2024-11-05 03:29:35.246447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.833 [2024-11-05 03:29:35.246570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.833 [2024-11-05 03:29:35.246590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.833 03:29:35 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.833 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:22.092 /dev/nbd0 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.092 1+0 records in 00:18:22.092 1+0 records out 00:18:22.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025732 s, 15.9 MB/s 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.092 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:22.351 /dev/nbd1 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.351 1+0 records in 00:18:22.351 1+0 records out 00:18:22.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414444 s, 9.9 MB/s 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.351 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:22.352 03:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:22.352 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.352 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.352 03:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.610 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.869 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:23.437 03:29:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84774 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84774 ']' 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84774 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84774 00:18:23.437 killing process with pid 84774 00:18:23.437 Received shutdown signal, test time was about 60.000000 seconds 00:18:23.437 00:18:23.437 Latency(us) 00:18:23.437 [2024-11-05T03:29:37.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.437 [2024-11-05T03:29:37.076Z] =================================================================================================================== 00:18:23.437 [2024-11-05T03:29:37.076Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84774' 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84774 00:18:23.437 [2024-11-05 03:29:36.853954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.437 03:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84774 00:18:23.696 [2024-11-05 03:29:37.280648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:25.073 00:18:25.073 real 0m20.185s 00:18:25.073 user 0m25.181s 00:18:25.073 sys 0m2.332s 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.073 ************************************ 00:18:25.073 END TEST raid5f_rebuild_test 00:18:25.073 ************************************ 00:18:25.073 03:29:38 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:25.073 03:29:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:25.073 03:29:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:25.073 03:29:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.073 ************************************ 00:18:25.073 START TEST raid5f_rebuild_test_sb 00:18:25.073 ************************************ 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:25.073 03:29:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85283 00:18:25.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85283 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85283 ']' 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:25.073 03:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.073 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:25.073 Zero copy mechanism will not be used. 
00:18:25.073 [2024-11-05 03:29:38.470627] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:18:25.073 [2024-11-05 03:29:38.470804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85283 ] 00:18:25.073 [2024-11-05 03:29:38.661796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.332 [2024-11-05 03:29:38.815411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.591 [2024-11-05 03:29:39.019086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.591 [2024-11-05 03:29:39.019146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.849 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.849 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:25.849 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:25.849 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:25.849 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.849 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.107 BaseBdev1_malloc 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.107 [2024-11-05 03:29:39.534821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.107 [2024-11-05 03:29:39.534918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.107 [2024-11-05 03:29:39.534948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.107 [2024-11-05 03:29:39.534966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.107 [2024-11-05 03:29:39.537981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.107 [2024-11-05 03:29:39.538165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.107 BaseBdev1 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.107 BaseBdev2_malloc 00:18:26.107 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.108 [2024-11-05 03:29:39.585961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:26.108 
[2024-11-05 03:29:39.586039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.108 [2024-11-05 03:29:39.586069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.108 [2024-11-05 03:29:39.586089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.108 [2024-11-05 03:29:39.588786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.108 [2024-11-05 03:29:39.588846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:26.108 BaseBdev2 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.108 BaseBdev3_malloc 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.108 [2024-11-05 03:29:39.648071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:26.108 [2024-11-05 03:29:39.648141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.108 [2024-11-05 03:29:39.648179] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:26.108 [2024-11-05 03:29:39.648197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.108 [2024-11-05 03:29:39.650925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.108 [2024-11-05 03:29:39.650979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:26.108 BaseBdev3 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.108 BaseBdev4_malloc 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.108 [2024-11-05 03:29:39.703590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:26.108 [2024-11-05 03:29:39.703665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.108 [2024-11-05 03:29:39.703695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:26.108 [2024-11-05 03:29:39.703713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:18:26.108 [2024-11-05 03:29:39.706459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.108 [2024-11-05 03:29:39.706524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:26.108 BaseBdev4 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.108 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.366 spare_malloc 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.366 spare_delay 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.366 [2024-11-05 03:29:39.763539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.366 [2024-11-05 03:29:39.763670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.366 [2024-11-05 03:29:39.763699] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:26.366 [2024-11-05 03:29:39.763717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.366 [2024-11-05 03:29:39.766559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.366 [2024-11-05 03:29:39.766738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.366 spare 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.366 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.366 [2024-11-05 03:29:39.771702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.366 [2024-11-05 03:29:39.774210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.366 [2024-11-05 03:29:39.774460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.366 [2024-11-05 03:29:39.774588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:26.366 [2024-11-05 03:29:39.774912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.366 [2024-11-05 03:29:39.774975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:26.366 [2024-11-05 03:29:39.775420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:26.367 [2024-11-05 03:29:39.782431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.367 
[2024-11-05 03:29:39.782565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:26.367 [2024-11-05 03:29:39.783038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.367 "name": "raid_bdev1", 00:18:26.367 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:26.367 "strip_size_kb": 64, 00:18:26.367 "state": "online", 00:18:26.367 "raid_level": "raid5f", 00:18:26.367 "superblock": true, 00:18:26.367 "num_base_bdevs": 4, 00:18:26.367 "num_base_bdevs_discovered": 4, 00:18:26.367 "num_base_bdevs_operational": 4, 00:18:26.367 "base_bdevs_list": [ 00:18:26.367 { 00:18:26.367 "name": "BaseBdev1", 00:18:26.367 "uuid": "2bbc2d56-661e-50af-82a2-7ae815ae3cfe", 00:18:26.367 "is_configured": true, 00:18:26.367 "data_offset": 2048, 00:18:26.367 "data_size": 63488 00:18:26.367 }, 00:18:26.367 { 00:18:26.367 "name": "BaseBdev2", 00:18:26.367 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:26.367 "is_configured": true, 00:18:26.367 "data_offset": 2048, 00:18:26.367 "data_size": 63488 00:18:26.367 }, 00:18:26.367 { 00:18:26.367 "name": "BaseBdev3", 00:18:26.367 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:26.367 "is_configured": true, 00:18:26.367 "data_offset": 2048, 00:18:26.367 "data_size": 63488 00:18:26.367 }, 00:18:26.367 { 00:18:26.367 "name": "BaseBdev4", 00:18:26.367 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:26.367 "is_configured": true, 00:18:26.367 "data_offset": 2048, 00:18:26.367 "data_size": 63488 00:18:26.367 } 00:18:26.367 ] 00:18:26.367 }' 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.367 03:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.934 [2024-11-05 03:29:40.319040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.934 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:27.193 [2024-11-05 03:29:40.698983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:27.193 /dev/nbd0 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:18:27.193 1+0 records in 00:18:27.193 1+0 records out 00:18:27.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029606 s, 13.8 MB/s 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:27.193 03:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:27.761 496+0 records in 00:18:27.761 496+0 records out 00:18:27.761 97517568 bytes (98 MB, 93 MiB) copied, 0.602771 s, 162 MB/s 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.761 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.329 [2024-11-05 03:29:41.676939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.329 [2024-11-05 03:29:41.688576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.329 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.329 "name": "raid_bdev1", 00:18:28.329 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:28.329 "strip_size_kb": 64, 00:18:28.329 "state": "online", 00:18:28.329 "raid_level": "raid5f", 00:18:28.329 "superblock": true, 00:18:28.329 "num_base_bdevs": 4, 00:18:28.329 "num_base_bdevs_discovered": 3, 00:18:28.329 
"num_base_bdevs_operational": 3, 00:18:28.329 "base_bdevs_list": [ 00:18:28.329 { 00:18:28.329 "name": null, 00:18:28.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.329 "is_configured": false, 00:18:28.329 "data_offset": 0, 00:18:28.329 "data_size": 63488 00:18:28.329 }, 00:18:28.329 { 00:18:28.329 "name": "BaseBdev2", 00:18:28.329 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:28.329 "is_configured": true, 00:18:28.329 "data_offset": 2048, 00:18:28.329 "data_size": 63488 00:18:28.329 }, 00:18:28.329 { 00:18:28.329 "name": "BaseBdev3", 00:18:28.329 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:28.329 "is_configured": true, 00:18:28.329 "data_offset": 2048, 00:18:28.329 "data_size": 63488 00:18:28.329 }, 00:18:28.329 { 00:18:28.329 "name": "BaseBdev4", 00:18:28.329 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:28.329 "is_configured": true, 00:18:28.329 "data_offset": 2048, 00:18:28.329 "data_size": 63488 00:18:28.329 } 00:18:28.329 ] 00:18:28.329 }' 00:18:28.330 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.330 03:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.588 03:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.588 03:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.588 03:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.588 [2024-11-05 03:29:42.216781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.847 [2024-11-05 03:29:42.231197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:28.847 03:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.847 03:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:28.847 
[2024-11-05 03:29:42.240389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.783 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.783 "name": "raid_bdev1", 00:18:29.783 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:29.783 "strip_size_kb": 64, 00:18:29.783 "state": "online", 00:18:29.783 "raid_level": "raid5f", 00:18:29.783 "superblock": true, 00:18:29.783 "num_base_bdevs": 4, 00:18:29.783 "num_base_bdevs_discovered": 4, 00:18:29.783 "num_base_bdevs_operational": 4, 00:18:29.783 "process": { 00:18:29.783 "type": "rebuild", 00:18:29.783 "target": "spare", 00:18:29.783 "progress": { 00:18:29.783 "blocks": 17280, 00:18:29.783 "percent": 9 00:18:29.783 } 00:18:29.783 }, 00:18:29.783 "base_bdevs_list": [ 00:18:29.783 { 00:18:29.783 "name": 
"spare", 00:18:29.783 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 }, 00:18:29.783 { 00:18:29.783 "name": "BaseBdev2", 00:18:29.783 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 }, 00:18:29.783 { 00:18:29.783 "name": "BaseBdev3", 00:18:29.783 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.783 "data_size": 63488 00:18:29.783 }, 00:18:29.783 { 00:18:29.783 "name": "BaseBdev4", 00:18:29.783 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:29.783 "is_configured": true, 00:18:29.783 "data_offset": 2048, 00:18:29.784 "data_size": 63488 00:18:29.784 } 00:18:29.784 ] 00:18:29.784 }' 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.784 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.784 [2024-11-05 03:29:43.418237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.055 [2024-11-05 03:29:43.450400] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.055 [2024-11-05 
03:29:43.450498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.055 [2024-11-05 03:29:43.450530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.055 [2024-11-05 03:29:43.450546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.055 "name": "raid_bdev1", 00:18:30.055 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:30.055 "strip_size_kb": 64, 00:18:30.055 "state": "online", 00:18:30.055 "raid_level": "raid5f", 00:18:30.055 "superblock": true, 00:18:30.055 "num_base_bdevs": 4, 00:18:30.055 "num_base_bdevs_discovered": 3, 00:18:30.055 "num_base_bdevs_operational": 3, 00:18:30.055 "base_bdevs_list": [ 00:18:30.055 { 00:18:30.055 "name": null, 00:18:30.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.055 "is_configured": false, 00:18:30.055 "data_offset": 0, 00:18:30.055 "data_size": 63488 00:18:30.055 }, 00:18:30.055 { 00:18:30.055 "name": "BaseBdev2", 00:18:30.055 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:30.055 "is_configured": true, 00:18:30.055 "data_offset": 2048, 00:18:30.055 "data_size": 63488 00:18:30.055 }, 00:18:30.055 { 00:18:30.055 "name": "BaseBdev3", 00:18:30.055 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:30.055 "is_configured": true, 00:18:30.055 "data_offset": 2048, 00:18:30.055 "data_size": 63488 00:18:30.055 }, 00:18:30.055 { 00:18:30.055 "name": "BaseBdev4", 00:18:30.055 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:30.055 "is_configured": true, 00:18:30.055 "data_offset": 2048, 00:18:30.055 "data_size": 63488 00:18:30.055 } 00:18:30.055 ] 00:18:30.055 }' 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.055 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.632 03:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.632 "name": "raid_bdev1", 00:18:30.632 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:30.632 "strip_size_kb": 64, 00:18:30.632 "state": "online", 00:18:30.632 "raid_level": "raid5f", 00:18:30.632 "superblock": true, 00:18:30.632 "num_base_bdevs": 4, 00:18:30.632 "num_base_bdevs_discovered": 3, 00:18:30.632 "num_base_bdevs_operational": 3, 00:18:30.632 "base_bdevs_list": [ 00:18:30.632 { 00:18:30.632 "name": null, 00:18:30.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.632 "is_configured": false, 00:18:30.632 "data_offset": 0, 00:18:30.632 "data_size": 63488 00:18:30.632 }, 00:18:30.632 { 00:18:30.632 "name": "BaseBdev2", 00:18:30.632 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:30.632 "is_configured": true, 00:18:30.632 "data_offset": 2048, 00:18:30.632 "data_size": 63488 00:18:30.632 }, 00:18:30.632 { 00:18:30.632 "name": "BaseBdev3", 00:18:30.632 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:30.632 "is_configured": true, 
00:18:30.632 "data_offset": 2048, 00:18:30.632 "data_size": 63488 00:18:30.632 }, 00:18:30.632 { 00:18:30.632 "name": "BaseBdev4", 00:18:30.632 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:30.632 "is_configured": true, 00:18:30.632 "data_offset": 2048, 00:18:30.632 "data_size": 63488 00:18:30.632 } 00:18:30.632 ] 00:18:30.632 }' 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.632 [2024-11-05 03:29:44.157409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.632 [2024-11-05 03:29:44.171630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.632 03:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:30.632 [2024-11-05 03:29:44.180501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.569 03:29:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.569 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.827 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.827 "name": "raid_bdev1", 00:18:31.827 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:31.827 "strip_size_kb": 64, 00:18:31.827 "state": "online", 00:18:31.827 "raid_level": "raid5f", 00:18:31.827 "superblock": true, 00:18:31.827 "num_base_bdevs": 4, 00:18:31.827 "num_base_bdevs_discovered": 4, 00:18:31.827 "num_base_bdevs_operational": 4, 00:18:31.827 "process": { 00:18:31.827 "type": "rebuild", 00:18:31.827 "target": "spare", 00:18:31.827 "progress": { 00:18:31.827 "blocks": 17280, 00:18:31.827 "percent": 9 00:18:31.827 } 00:18:31.827 }, 00:18:31.827 "base_bdevs_list": [ 00:18:31.827 { 00:18:31.827 "name": "spare", 00:18:31.827 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:31.827 "is_configured": true, 00:18:31.827 "data_offset": 2048, 00:18:31.827 "data_size": 63488 00:18:31.827 }, 00:18:31.827 { 00:18:31.827 "name": "BaseBdev2", 00:18:31.827 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:31.827 "is_configured": true, 00:18:31.827 "data_offset": 2048, 00:18:31.827 "data_size": 63488 
00:18:31.827 }, 00:18:31.827 { 00:18:31.827 "name": "BaseBdev3", 00:18:31.828 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:31.828 "is_configured": true, 00:18:31.828 "data_offset": 2048, 00:18:31.828 "data_size": 63488 00:18:31.828 }, 00:18:31.828 { 00:18:31.828 "name": "BaseBdev4", 00:18:31.828 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:31.828 "is_configured": true, 00:18:31.828 "data_offset": 2048, 00:18:31.828 "data_size": 63488 00:18:31.828 } 00:18:31.828 ] 00:18:31.828 }' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:31.828 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=687 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.828 03:29:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.828 "name": "raid_bdev1", 00:18:31.828 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:31.828 "strip_size_kb": 64, 00:18:31.828 "state": "online", 00:18:31.828 "raid_level": "raid5f", 00:18:31.828 "superblock": true, 00:18:31.828 "num_base_bdevs": 4, 00:18:31.828 "num_base_bdevs_discovered": 4, 00:18:31.828 "num_base_bdevs_operational": 4, 00:18:31.828 "process": { 00:18:31.828 "type": "rebuild", 00:18:31.828 "target": "spare", 00:18:31.828 "progress": { 00:18:31.828 "blocks": 21120, 00:18:31.828 "percent": 11 00:18:31.828 } 00:18:31.828 }, 00:18:31.828 "base_bdevs_list": [ 00:18:31.828 { 00:18:31.828 "name": "spare", 00:18:31.828 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:31.828 "is_configured": true, 00:18:31.828 "data_offset": 2048, 00:18:31.828 "data_size": 63488 00:18:31.828 }, 00:18:31.828 { 00:18:31.828 "name": "BaseBdev2", 00:18:31.828 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:31.828 "is_configured": true, 00:18:31.828 "data_offset": 2048, 00:18:31.828 "data_size": 63488 
00:18:31.828 }, 00:18:31.828 { 00:18:31.828 "name": "BaseBdev3", 00:18:31.828 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:31.828 "is_configured": true, 00:18:31.828 "data_offset": 2048, 00:18:31.828 "data_size": 63488 00:18:31.828 }, 00:18:31.828 { 00:18:31.828 "name": "BaseBdev4", 00:18:31.828 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:31.828 "is_configured": true, 00:18:31.828 "data_offset": 2048, 00:18:31.828 "data_size": 63488 00:18:31.828 } 00:18:31.828 ] 00:18:31.828 }' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.828 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.086 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.086 03:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.021 "name": "raid_bdev1", 00:18:33.021 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:33.021 "strip_size_kb": 64, 00:18:33.021 "state": "online", 00:18:33.021 "raid_level": "raid5f", 00:18:33.021 "superblock": true, 00:18:33.021 "num_base_bdevs": 4, 00:18:33.021 "num_base_bdevs_discovered": 4, 00:18:33.021 "num_base_bdevs_operational": 4, 00:18:33.021 "process": { 00:18:33.021 "type": "rebuild", 00:18:33.021 "target": "spare", 00:18:33.021 "progress": { 00:18:33.021 "blocks": 44160, 00:18:33.021 "percent": 23 00:18:33.021 } 00:18:33.021 }, 00:18:33.021 "base_bdevs_list": [ 00:18:33.021 { 00:18:33.021 "name": "spare", 00:18:33.021 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:33.021 "is_configured": true, 00:18:33.021 "data_offset": 2048, 00:18:33.021 "data_size": 63488 00:18:33.021 }, 00:18:33.021 { 00:18:33.021 "name": "BaseBdev2", 00:18:33.021 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:33.021 "is_configured": true, 00:18:33.021 "data_offset": 2048, 00:18:33.021 "data_size": 63488 00:18:33.021 }, 00:18:33.021 { 00:18:33.021 "name": "BaseBdev3", 00:18:33.021 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:33.021 "is_configured": true, 00:18:33.021 "data_offset": 2048, 00:18:33.021 "data_size": 63488 00:18:33.021 }, 00:18:33.021 { 00:18:33.021 "name": "BaseBdev4", 00:18:33.021 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:33.021 "is_configured": true, 00:18:33.021 "data_offset": 2048, 00:18:33.021 "data_size": 63488 00:18:33.021 } 00:18:33.021 ] 00:18:33.021 }' 00:18:33.021 03:29:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.021 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.279 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.279 03:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.215 "name": "raid_bdev1", 00:18:34.215 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:34.215 
"strip_size_kb": 64, 00:18:34.215 "state": "online", 00:18:34.215 "raid_level": "raid5f", 00:18:34.215 "superblock": true, 00:18:34.215 "num_base_bdevs": 4, 00:18:34.215 "num_base_bdevs_discovered": 4, 00:18:34.215 "num_base_bdevs_operational": 4, 00:18:34.215 "process": { 00:18:34.215 "type": "rebuild", 00:18:34.215 "target": "spare", 00:18:34.215 "progress": { 00:18:34.215 "blocks": 65280, 00:18:34.215 "percent": 34 00:18:34.215 } 00:18:34.215 }, 00:18:34.215 "base_bdevs_list": [ 00:18:34.215 { 00:18:34.215 "name": "spare", 00:18:34.215 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:34.215 "is_configured": true, 00:18:34.215 "data_offset": 2048, 00:18:34.215 "data_size": 63488 00:18:34.215 }, 00:18:34.215 { 00:18:34.215 "name": "BaseBdev2", 00:18:34.215 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:34.215 "is_configured": true, 00:18:34.215 "data_offset": 2048, 00:18:34.215 "data_size": 63488 00:18:34.215 }, 00:18:34.215 { 00:18:34.215 "name": "BaseBdev3", 00:18:34.215 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:34.215 "is_configured": true, 00:18:34.215 "data_offset": 2048, 00:18:34.215 "data_size": 63488 00:18:34.215 }, 00:18:34.215 { 00:18:34.215 "name": "BaseBdev4", 00:18:34.215 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:34.215 "is_configured": true, 00:18:34.215 "data_offset": 2048, 00:18:34.215 "data_size": 63488 00:18:34.215 } 00:18:34.215 ] 00:18:34.215 }' 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.215 03:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.592 
03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.592 "name": "raid_bdev1", 00:18:35.592 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:35.592 "strip_size_kb": 64, 00:18:35.592 "state": "online", 00:18:35.592 "raid_level": "raid5f", 00:18:35.592 "superblock": true, 00:18:35.592 "num_base_bdevs": 4, 00:18:35.592 "num_base_bdevs_discovered": 4, 00:18:35.592 "num_base_bdevs_operational": 4, 00:18:35.592 "process": { 00:18:35.592 "type": "rebuild", 00:18:35.592 "target": "spare", 00:18:35.592 "progress": { 00:18:35.592 "blocks": 88320, 00:18:35.592 "percent": 46 00:18:35.592 } 00:18:35.592 }, 00:18:35.592 "base_bdevs_list": [ 00:18:35.592 { 00:18:35.592 "name": "spare", 00:18:35.592 "uuid": 
"5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:35.592 "is_configured": true, 00:18:35.592 "data_offset": 2048, 00:18:35.592 "data_size": 63488 00:18:35.592 }, 00:18:35.592 { 00:18:35.592 "name": "BaseBdev2", 00:18:35.592 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:35.592 "is_configured": true, 00:18:35.592 "data_offset": 2048, 00:18:35.592 "data_size": 63488 00:18:35.592 }, 00:18:35.592 { 00:18:35.592 "name": "BaseBdev3", 00:18:35.592 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:35.592 "is_configured": true, 00:18:35.592 "data_offset": 2048, 00:18:35.592 "data_size": 63488 00:18:35.592 }, 00:18:35.592 { 00:18:35.592 "name": "BaseBdev4", 00:18:35.592 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:35.592 "is_configured": true, 00:18:35.592 "data_offset": 2048, 00:18:35.592 "data_size": 63488 00:18:35.592 } 00:18:35.592 ] 00:18:35.592 }' 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.592 03:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.541 03:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.541 "name": "raid_bdev1", 00:18:36.541 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:36.541 "strip_size_kb": 64, 00:18:36.541 "state": "online", 00:18:36.541 "raid_level": "raid5f", 00:18:36.541 "superblock": true, 00:18:36.541 "num_base_bdevs": 4, 00:18:36.541 "num_base_bdevs_discovered": 4, 00:18:36.541 "num_base_bdevs_operational": 4, 00:18:36.541 "process": { 00:18:36.541 "type": "rebuild", 00:18:36.541 "target": "spare", 00:18:36.541 "progress": { 00:18:36.541 "blocks": 109440, 00:18:36.541 "percent": 57 00:18:36.541 } 00:18:36.541 }, 00:18:36.541 "base_bdevs_list": [ 00:18:36.541 { 00:18:36.541 "name": "spare", 00:18:36.541 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:36.541 "is_configured": true, 00:18:36.541 "data_offset": 2048, 00:18:36.541 "data_size": 63488 00:18:36.541 }, 00:18:36.541 { 00:18:36.541 "name": "BaseBdev2", 00:18:36.541 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:36.541 "is_configured": true, 00:18:36.541 "data_offset": 2048, 00:18:36.541 "data_size": 63488 00:18:36.541 }, 00:18:36.541 { 00:18:36.541 "name": "BaseBdev3", 00:18:36.541 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:36.541 "is_configured": true, 00:18:36.541 
"data_offset": 2048, 00:18:36.541 "data_size": 63488 00:18:36.541 }, 00:18:36.541 { 00:18:36.541 "name": "BaseBdev4", 00:18:36.541 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:36.541 "is_configured": true, 00:18:36.541 "data_offset": 2048, 00:18:36.541 "data_size": 63488 00:18:36.541 } 00:18:36.541 ] 00:18:36.541 }' 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.541 03:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.918 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.918 "name": "raid_bdev1", 00:18:37.918 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:37.918 "strip_size_kb": 64, 00:18:37.918 "state": "online", 00:18:37.918 "raid_level": "raid5f", 00:18:37.918 "superblock": true, 00:18:37.918 "num_base_bdevs": 4, 00:18:37.918 "num_base_bdevs_discovered": 4, 00:18:37.918 "num_base_bdevs_operational": 4, 00:18:37.918 "process": { 00:18:37.919 "type": "rebuild", 00:18:37.919 "target": "spare", 00:18:37.919 "progress": { 00:18:37.919 "blocks": 132480, 00:18:37.919 "percent": 69 00:18:37.919 } 00:18:37.919 }, 00:18:37.919 "base_bdevs_list": [ 00:18:37.919 { 00:18:37.919 "name": "spare", 00:18:37.919 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:37.919 "is_configured": true, 00:18:37.919 "data_offset": 2048, 00:18:37.919 "data_size": 63488 00:18:37.919 }, 00:18:37.919 { 00:18:37.919 "name": "BaseBdev2", 00:18:37.919 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:37.919 "is_configured": true, 00:18:37.919 "data_offset": 2048, 00:18:37.919 "data_size": 63488 00:18:37.919 }, 00:18:37.919 { 00:18:37.919 "name": "BaseBdev3", 00:18:37.919 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:37.919 "is_configured": true, 00:18:37.919 "data_offset": 2048, 00:18:37.919 "data_size": 63488 00:18:37.919 }, 00:18:37.919 { 00:18:37.919 "name": "BaseBdev4", 00:18:37.919 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:37.919 "is_configured": true, 00:18:37.919 "data_offset": 2048, 00:18:37.919 "data_size": 63488 00:18:37.919 } 00:18:37.919 ] 00:18:37.919 }' 00:18:37.919 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.919 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:37.919 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.919 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.919 03:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.856 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.856 "name": "raid_bdev1", 00:18:38.856 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:38.856 "strip_size_kb": 64, 00:18:38.856 "state": "online", 00:18:38.856 "raid_level": "raid5f", 00:18:38.856 "superblock": true, 00:18:38.856 "num_base_bdevs": 4, 00:18:38.856 "num_base_bdevs_discovered": 4, 
00:18:38.856 "num_base_bdevs_operational": 4, 00:18:38.856 "process": { 00:18:38.856 "type": "rebuild", 00:18:38.856 "target": "spare", 00:18:38.856 "progress": { 00:18:38.856 "blocks": 153600, 00:18:38.856 "percent": 80 00:18:38.856 } 00:18:38.856 }, 00:18:38.856 "base_bdevs_list": [ 00:18:38.856 { 00:18:38.856 "name": "spare", 00:18:38.856 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:38.856 "is_configured": true, 00:18:38.856 "data_offset": 2048, 00:18:38.856 "data_size": 63488 00:18:38.856 }, 00:18:38.856 { 00:18:38.856 "name": "BaseBdev2", 00:18:38.856 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:38.856 "is_configured": true, 00:18:38.856 "data_offset": 2048, 00:18:38.856 "data_size": 63488 00:18:38.856 }, 00:18:38.856 { 00:18:38.856 "name": "BaseBdev3", 00:18:38.856 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:38.856 "is_configured": true, 00:18:38.857 "data_offset": 2048, 00:18:38.857 "data_size": 63488 00:18:38.857 }, 00:18:38.857 { 00:18:38.857 "name": "BaseBdev4", 00:18:38.857 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:38.857 "is_configured": true, 00:18:38.857 "data_offset": 2048, 00:18:38.857 "data_size": 63488 00:18:38.857 } 00:18:38.857 ] 00:18:38.857 }' 00:18:38.857 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.857 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.857 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.115 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.115 03:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.052 "name": "raid_bdev1", 00:18:40.052 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:40.052 "strip_size_kb": 64, 00:18:40.052 "state": "online", 00:18:40.052 "raid_level": "raid5f", 00:18:40.052 "superblock": true, 00:18:40.052 "num_base_bdevs": 4, 00:18:40.052 "num_base_bdevs_discovered": 4, 00:18:40.052 "num_base_bdevs_operational": 4, 00:18:40.052 "process": { 00:18:40.052 "type": "rebuild", 00:18:40.052 "target": "spare", 00:18:40.052 "progress": { 00:18:40.052 "blocks": 176640, 00:18:40.052 "percent": 92 00:18:40.052 } 00:18:40.052 }, 00:18:40.052 "base_bdevs_list": [ 00:18:40.052 { 00:18:40.052 "name": "spare", 00:18:40.052 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:40.052 "is_configured": true, 00:18:40.052 "data_offset": 2048, 00:18:40.052 "data_size": 63488 00:18:40.052 }, 00:18:40.052 { 00:18:40.052 "name": "BaseBdev2", 
00:18:40.052 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:40.052 "is_configured": true, 00:18:40.052 "data_offset": 2048, 00:18:40.052 "data_size": 63488 00:18:40.052 }, 00:18:40.052 { 00:18:40.052 "name": "BaseBdev3", 00:18:40.052 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:40.052 "is_configured": true, 00:18:40.052 "data_offset": 2048, 00:18:40.052 "data_size": 63488 00:18:40.052 }, 00:18:40.052 { 00:18:40.052 "name": "BaseBdev4", 00:18:40.052 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:40.052 "is_configured": true, 00:18:40.052 "data_offset": 2048, 00:18:40.052 "data_size": 63488 00:18:40.052 } 00:18:40.052 ] 00:18:40.052 }' 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.052 03:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.992 [2024-11-05 03:29:54.277406] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:40.992 [2024-11-05 03:29:54.277560] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:40.992 [2024-11-05 03:29:54.277773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.251 03:29:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.251 "name": "raid_bdev1", 00:18:41.251 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:41.251 "strip_size_kb": 64, 00:18:41.251 "state": "online", 00:18:41.251 "raid_level": "raid5f", 00:18:41.251 "superblock": true, 00:18:41.251 "num_base_bdevs": 4, 00:18:41.251 "num_base_bdevs_discovered": 4, 00:18:41.251 "num_base_bdevs_operational": 4, 00:18:41.251 "base_bdevs_list": [ 00:18:41.251 { 00:18:41.251 "name": "spare", 00:18:41.251 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.251 }, 00:18:41.251 { 00:18:41.251 "name": "BaseBdev2", 00:18:41.251 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.251 }, 00:18:41.251 { 00:18:41.251 "name": "BaseBdev3", 00:18:41.251 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 
"data_size": 63488 00:18:41.251 }, 00:18:41.251 { 00:18:41.251 "name": "BaseBdev4", 00:18:41.251 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.251 } 00:18:41.251 ] 00:18:41.251 }' 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.251 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.251 03:29:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.251 "name": "raid_bdev1", 00:18:41.251 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:41.251 "strip_size_kb": 64, 00:18:41.251 "state": "online", 00:18:41.251 "raid_level": "raid5f", 00:18:41.251 "superblock": true, 00:18:41.251 "num_base_bdevs": 4, 00:18:41.251 "num_base_bdevs_discovered": 4, 00:18:41.251 "num_base_bdevs_operational": 4, 00:18:41.251 "base_bdevs_list": [ 00:18:41.251 { 00:18:41.251 "name": "spare", 00:18:41.251 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.251 }, 00:18:41.251 { 00:18:41.251 "name": "BaseBdev2", 00:18:41.251 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.251 }, 00:18:41.251 { 00:18:41.251 "name": "BaseBdev3", 00:18:41.251 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.251 }, 00:18:41.251 { 00:18:41.251 "name": "BaseBdev4", 00:18:41.251 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:41.251 "is_configured": true, 00:18:41.251 "data_offset": 2048, 00:18:41.251 "data_size": 63488 00:18:41.252 } 00:18:41.252 ] 00:18:41.252 }' 00:18:41.252 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.510 03:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.510 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.510 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.510 "name": "raid_bdev1", 00:18:41.510 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:41.510 "strip_size_kb": 64, 00:18:41.510 "state": "online", 00:18:41.510 "raid_level": "raid5f", 00:18:41.510 "superblock": true, 00:18:41.510 "num_base_bdevs": 4, 00:18:41.510 "num_base_bdevs_discovered": 4, 00:18:41.510 
"num_base_bdevs_operational": 4, 00:18:41.510 "base_bdevs_list": [ 00:18:41.510 { 00:18:41.510 "name": "spare", 00:18:41.510 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:41.510 "is_configured": true, 00:18:41.510 "data_offset": 2048, 00:18:41.510 "data_size": 63488 00:18:41.510 }, 00:18:41.510 { 00:18:41.510 "name": "BaseBdev2", 00:18:41.510 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:41.510 "is_configured": true, 00:18:41.510 "data_offset": 2048, 00:18:41.510 "data_size": 63488 00:18:41.510 }, 00:18:41.510 { 00:18:41.510 "name": "BaseBdev3", 00:18:41.510 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:41.510 "is_configured": true, 00:18:41.510 "data_offset": 2048, 00:18:41.510 "data_size": 63488 00:18:41.510 }, 00:18:41.510 { 00:18:41.510 "name": "BaseBdev4", 00:18:41.510 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:41.510 "is_configured": true, 00:18:41.510 "data_offset": 2048, 00:18:41.510 "data_size": 63488 00:18:41.510 } 00:18:41.510 ] 00:18:41.510 }' 00:18:41.510 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.510 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.078 [2024-11-05 03:29:55.512825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.078 [2024-11-05 03:29:55.512867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.078 [2024-11-05 03:29:55.512996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.078 [2024-11-05 03:29:55.513117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:42.078 [2024-11-05 03:29:55.513146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:42.078 03:29:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.078 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:42.337 /dev/nbd0 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.337 1+0 records in 00:18:42.337 1+0 records out 00:18:42.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236597 s, 17.3 MB/s 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # size=4096 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.337 03:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:42.597 /dev/nbd1 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.856 1+0 records in 00:18:42.856 1+0 records out 00:18:42.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423045 s, 9.7 MB/s 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.856 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.115 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.375 03:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.375 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.375 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:43.375 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.375 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.634 [2024-11-05 03:29:57.012245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:43.634 [2024-11-05 03:29:57.012373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.634 [2024-11-05 03:29:57.012410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:43.634 [2024-11-05 03:29:57.012429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.634 [2024-11-05 03:29:57.015413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.634 [2024-11-05 03:29:57.015458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:43.634 [2024-11-05 03:29:57.015568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:43.634 [2024-11-05 03:29:57.015630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.634 [2024-11-05 03:29:57.015834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.634 [2024-11-05 03:29:57.015964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:43.634 [2024-11-05 03:29:57.016092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.634 spare 00:18:43.634 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.634 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:43.634 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.634 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.634 [2024-11-05 03:29:57.116220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:43.634 [2024-11-05 03:29:57.116307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:43.634 [2024-11-05 03:29:57.116776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:43.634 [2024-11-05 03:29:57.123672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:43.635 [2024-11-05 03:29:57.123704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:43.635 [2024-11-05 03:29:57.124023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.635 03:29:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.635 "name": "raid_bdev1", 00:18:43.635 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:43.635 "strip_size_kb": 64, 00:18:43.635 "state": "online", 00:18:43.635 "raid_level": "raid5f", 00:18:43.635 "superblock": true, 00:18:43.635 "num_base_bdevs": 4, 00:18:43.635 "num_base_bdevs_discovered": 4, 00:18:43.635 "num_base_bdevs_operational": 4, 00:18:43.635 "base_bdevs_list": [ 00:18:43.635 { 00:18:43.635 "name": "spare", 00:18:43.635 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:43.635 "is_configured": true, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 }, 00:18:43.635 { 00:18:43.635 "name": "BaseBdev2", 00:18:43.635 "uuid": 
"aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:43.635 "is_configured": true, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 }, 00:18:43.635 { 00:18:43.635 "name": "BaseBdev3", 00:18:43.635 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:43.635 "is_configured": true, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 }, 00:18:43.635 { 00:18:43.635 "name": "BaseBdev4", 00:18:43.635 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:43.635 "is_configured": true, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 } 00:18:43.635 ] 00:18:43.635 }' 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.635 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.202 03:29:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.202 "name": "raid_bdev1", 00:18:44.202 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:44.202 "strip_size_kb": 64, 00:18:44.202 "state": "online", 00:18:44.202 "raid_level": "raid5f", 00:18:44.202 "superblock": true, 00:18:44.202 "num_base_bdevs": 4, 00:18:44.202 "num_base_bdevs_discovered": 4, 00:18:44.202 "num_base_bdevs_operational": 4, 00:18:44.202 "base_bdevs_list": [ 00:18:44.202 { 00:18:44.202 "name": "spare", 00:18:44.202 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:44.202 "is_configured": true, 00:18:44.202 "data_offset": 2048, 00:18:44.202 "data_size": 63488 00:18:44.202 }, 00:18:44.202 { 00:18:44.202 "name": "BaseBdev2", 00:18:44.203 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:44.203 "is_configured": true, 00:18:44.203 "data_offset": 2048, 00:18:44.203 "data_size": 63488 00:18:44.203 }, 00:18:44.203 { 00:18:44.203 "name": "BaseBdev3", 00:18:44.203 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:44.203 "is_configured": true, 00:18:44.203 "data_offset": 2048, 00:18:44.203 "data_size": 63488 00:18:44.203 }, 00:18:44.203 { 00:18:44.203 "name": "BaseBdev4", 00:18:44.203 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:44.203 "is_configured": true, 00:18:44.203 "data_offset": 2048, 00:18:44.203 "data_size": 63488 00:18:44.203 } 00:18:44.203 ] 00:18:44.203 }' 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.203 
03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:44.203 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.461 [2024-11-05 03:29:57.852466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.461 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.462 "name": "raid_bdev1", 00:18:44.462 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:44.462 "strip_size_kb": 64, 00:18:44.462 "state": "online", 00:18:44.462 "raid_level": "raid5f", 00:18:44.462 "superblock": true, 00:18:44.462 "num_base_bdevs": 4, 00:18:44.462 "num_base_bdevs_discovered": 3, 00:18:44.462 "num_base_bdevs_operational": 3, 00:18:44.462 "base_bdevs_list": [ 00:18:44.462 { 00:18:44.462 "name": null, 00:18:44.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.462 "is_configured": false, 00:18:44.462 "data_offset": 0, 00:18:44.462 "data_size": 63488 00:18:44.462 }, 00:18:44.462 { 00:18:44.462 "name": "BaseBdev2", 00:18:44.462 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:44.462 "is_configured": true, 00:18:44.462 "data_offset": 2048, 00:18:44.462 "data_size": 63488 00:18:44.462 }, 00:18:44.462 { 00:18:44.462 "name": "BaseBdev3", 00:18:44.462 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:44.462 "is_configured": true, 00:18:44.462 "data_offset": 2048, 00:18:44.462 "data_size": 63488 00:18:44.462 }, 00:18:44.462 { 00:18:44.462 "name": "BaseBdev4", 
00:18:44.462 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:44.462 "is_configured": true, 00:18:44.462 "data_offset": 2048, 00:18:44.462 "data_size": 63488 00:18:44.462 } 00:18:44.462 ] 00:18:44.462 }' 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.462 03:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.034 03:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.034 03:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.034 03:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.034 [2024-11-05 03:29:58.376662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.034 [2024-11-05 03:29:58.376982] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.034 [2024-11-05 03:29:58.377010] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:45.034 [2024-11-05 03:29:58.377061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.034 [2024-11-05 03:29:58.391239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:45.034 03:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.034 03:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:45.034 [2024-11-05 03:29:58.400849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.979 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.979 "name": "raid_bdev1", 00:18:45.979 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:45.979 "strip_size_kb": 64, 00:18:45.979 "state": "online", 00:18:45.979 
"raid_level": "raid5f", 00:18:45.979 "superblock": true, 00:18:45.979 "num_base_bdevs": 4, 00:18:45.979 "num_base_bdevs_discovered": 4, 00:18:45.979 "num_base_bdevs_operational": 4, 00:18:45.979 "process": { 00:18:45.979 "type": "rebuild", 00:18:45.979 "target": "spare", 00:18:45.979 "progress": { 00:18:45.979 "blocks": 17280, 00:18:45.979 "percent": 9 00:18:45.979 } 00:18:45.979 }, 00:18:45.979 "base_bdevs_list": [ 00:18:45.979 { 00:18:45.979 "name": "spare", 00:18:45.979 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:45.979 "is_configured": true, 00:18:45.979 "data_offset": 2048, 00:18:45.979 "data_size": 63488 00:18:45.979 }, 00:18:45.979 { 00:18:45.980 "name": "BaseBdev2", 00:18:45.980 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:45.980 "is_configured": true, 00:18:45.980 "data_offset": 2048, 00:18:45.980 "data_size": 63488 00:18:45.980 }, 00:18:45.980 { 00:18:45.980 "name": "BaseBdev3", 00:18:45.980 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:45.980 "is_configured": true, 00:18:45.980 "data_offset": 2048, 00:18:45.980 "data_size": 63488 00:18:45.980 }, 00:18:45.980 { 00:18:45.980 "name": "BaseBdev4", 00:18:45.980 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:45.980 "is_configured": true, 00:18:45.980 "data_offset": 2048, 00:18:45.980 "data_size": 63488 00:18:45.980 } 00:18:45.980 ] 00:18:45.980 }' 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.980 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.980 [2024-11-05 03:29:59.559369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.980 [2024-11-05 03:29:59.614633] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:45.980 [2024-11-05 03:29:59.614994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.980 [2024-11-05 03:29:59.615025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.980 [2024-11-05 03:29:59.615045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.245 "name": "raid_bdev1", 00:18:46.245 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:46.245 "strip_size_kb": 64, 00:18:46.245 "state": "online", 00:18:46.245 "raid_level": "raid5f", 00:18:46.245 "superblock": true, 00:18:46.245 "num_base_bdevs": 4, 00:18:46.245 "num_base_bdevs_discovered": 3, 00:18:46.245 "num_base_bdevs_operational": 3, 00:18:46.245 "base_bdevs_list": [ 00:18:46.245 { 00:18:46.245 "name": null, 00:18:46.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.245 "is_configured": false, 00:18:46.245 "data_offset": 0, 00:18:46.245 "data_size": 63488 00:18:46.245 }, 00:18:46.245 { 00:18:46.245 "name": "BaseBdev2", 00:18:46.245 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:46.245 "is_configured": true, 00:18:46.245 "data_offset": 2048, 00:18:46.245 "data_size": 63488 00:18:46.245 }, 00:18:46.245 { 00:18:46.245 "name": "BaseBdev3", 00:18:46.245 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:46.245 "is_configured": true, 00:18:46.245 "data_offset": 2048, 00:18:46.245 "data_size": 63488 00:18:46.245 }, 00:18:46.245 { 00:18:46.245 "name": "BaseBdev4", 00:18:46.245 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:46.245 "is_configured": true, 00:18:46.245 "data_offset": 2048, 00:18:46.245 "data_size": 63488 00:18:46.245 } 00:18:46.245 ] 00:18:46.245 }' 
00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.245 03:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.813 03:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.813 03:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.813 03:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.813 [2024-11-05 03:30:00.172828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.813 [2024-11-05 03:30:00.172921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.813 [2024-11-05 03:30:00.172963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:46.813 [2024-11-05 03:30:00.172983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.813 [2024-11-05 03:30:00.173701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.813 [2024-11-05 03:30:00.173761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.813 [2024-11-05 03:30:00.173889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.813 [2024-11-05 03:30:00.173915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:46.813 [2024-11-05 03:30:00.173928] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:46.813 [2024-11-05 03:30:00.173972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.813 [2024-11-05 03:30:00.187983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:46.813 spare 00:18:46.813 03:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.813 03:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:46.813 [2024-11-05 03:30:00.196977] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.749 "name": "raid_bdev1", 00:18:47.749 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:47.749 "strip_size_kb": 64, 00:18:47.749 "state": 
"online", 00:18:47.749 "raid_level": "raid5f", 00:18:47.749 "superblock": true, 00:18:47.749 "num_base_bdevs": 4, 00:18:47.749 "num_base_bdevs_discovered": 4, 00:18:47.749 "num_base_bdevs_operational": 4, 00:18:47.749 "process": { 00:18:47.749 "type": "rebuild", 00:18:47.749 "target": "spare", 00:18:47.749 "progress": { 00:18:47.749 "blocks": 17280, 00:18:47.749 "percent": 9 00:18:47.749 } 00:18:47.749 }, 00:18:47.749 "base_bdevs_list": [ 00:18:47.749 { 00:18:47.749 "name": "spare", 00:18:47.749 "uuid": "5fb972a2-0911-5704-a001-5dfb5ae28086", 00:18:47.749 "is_configured": true, 00:18:47.749 "data_offset": 2048, 00:18:47.749 "data_size": 63488 00:18:47.749 }, 00:18:47.749 { 00:18:47.749 "name": "BaseBdev2", 00:18:47.749 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:47.749 "is_configured": true, 00:18:47.749 "data_offset": 2048, 00:18:47.749 "data_size": 63488 00:18:47.749 }, 00:18:47.749 { 00:18:47.749 "name": "BaseBdev3", 00:18:47.749 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:47.749 "is_configured": true, 00:18:47.749 "data_offset": 2048, 00:18:47.749 "data_size": 63488 00:18:47.749 }, 00:18:47.749 { 00:18:47.749 "name": "BaseBdev4", 00:18:47.749 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:47.749 "is_configured": true, 00:18:47.749 "data_offset": 2048, 00:18:47.749 "data_size": 63488 00:18:47.749 } 00:18:47.749 ] 00:18:47.749 }' 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.749 03:30:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.749 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.749 [2024-11-05 03:30:01.366475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.008 [2024-11-05 03:30:01.410492] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:48.008 [2024-11-05 03:30:01.410595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.008 [2024-11-05 03:30:01.410627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.008 [2024-11-05 03:30:01.410639] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.008 03:30:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.008 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.008 "name": "raid_bdev1", 00:18:48.008 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:48.008 "strip_size_kb": 64, 00:18:48.008 "state": "online", 00:18:48.008 "raid_level": "raid5f", 00:18:48.008 "superblock": true, 00:18:48.008 "num_base_bdevs": 4, 00:18:48.008 "num_base_bdevs_discovered": 3, 00:18:48.008 "num_base_bdevs_operational": 3, 00:18:48.008 "base_bdevs_list": [ 00:18:48.008 { 00:18:48.008 "name": null, 00:18:48.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.008 "is_configured": false, 00:18:48.008 "data_offset": 0, 00:18:48.008 "data_size": 63488 00:18:48.008 }, 00:18:48.008 { 00:18:48.008 "name": "BaseBdev2", 00:18:48.008 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:48.008 "is_configured": true, 00:18:48.008 "data_offset": 2048, 00:18:48.008 "data_size": 63488 00:18:48.008 }, 00:18:48.008 { 00:18:48.008 "name": "BaseBdev3", 00:18:48.008 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:48.008 "is_configured": true, 00:18:48.008 "data_offset": 2048, 00:18:48.008 "data_size": 63488 00:18:48.008 }, 00:18:48.008 { 00:18:48.009 "name": "BaseBdev4", 00:18:48.009 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:48.009 "is_configured": true, 00:18:48.009 "data_offset": 2048, 00:18:48.009 
"data_size": 63488 00:18:48.009 } 00:18:48.009 ] 00:18:48.009 }' 00:18:48.009 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.009 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.578 03:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.578 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.578 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.578 "name": "raid_bdev1", 00:18:48.578 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:48.578 "strip_size_kb": 64, 00:18:48.578 "state": "online", 00:18:48.578 "raid_level": "raid5f", 00:18:48.578 "superblock": true, 00:18:48.578 "num_base_bdevs": 4, 00:18:48.578 "num_base_bdevs_discovered": 3, 00:18:48.578 "num_base_bdevs_operational": 3, 00:18:48.578 "base_bdevs_list": [ 00:18:48.578 { 00:18:48.578 "name": null, 00:18:48.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.578 
"is_configured": false, 00:18:48.579 "data_offset": 0, 00:18:48.579 "data_size": 63488 00:18:48.579 }, 00:18:48.579 { 00:18:48.579 "name": "BaseBdev2", 00:18:48.579 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:48.579 "is_configured": true, 00:18:48.579 "data_offset": 2048, 00:18:48.579 "data_size": 63488 00:18:48.579 }, 00:18:48.579 { 00:18:48.579 "name": "BaseBdev3", 00:18:48.579 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:48.579 "is_configured": true, 00:18:48.579 "data_offset": 2048, 00:18:48.579 "data_size": 63488 00:18:48.579 }, 00:18:48.579 { 00:18:48.579 "name": "BaseBdev4", 00:18:48.579 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:48.579 "is_configured": true, 00:18:48.579 "data_offset": 2048, 00:18:48.579 "data_size": 63488 00:18:48.579 } 00:18:48.579 ] 00:18:48.579 }' 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.579 03:30:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.579 [2024-11-05 03:30:02.170407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:48.579 [2024-11-05 03:30:02.170472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.579 [2024-11-05 03:30:02.170505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:48.579 [2024-11-05 03:30:02.170520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.579 [2024-11-05 03:30:02.171191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.579 [2024-11-05 03:30:02.171220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:48.579 [2024-11-05 03:30:02.171381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:48.579 [2024-11-05 03:30:02.171404] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.579 [2024-11-05 03:30:02.171421] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:48.579 [2024-11-05 03:30:02.171435] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:48.579 BaseBdev1 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.579 03:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.955 "name": "raid_bdev1", 00:18:49.955 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:49.955 "strip_size_kb": 64, 00:18:49.955 "state": "online", 00:18:49.955 "raid_level": "raid5f", 00:18:49.955 "superblock": true, 00:18:49.955 "num_base_bdevs": 4, 00:18:49.955 "num_base_bdevs_discovered": 3, 00:18:49.955 "num_base_bdevs_operational": 3, 00:18:49.955 "base_bdevs_list": [ 00:18:49.955 { 00:18:49.955 "name": null, 00:18:49.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.955 "is_configured": false, 00:18:49.955 
"data_offset": 0, 00:18:49.955 "data_size": 63488 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "name": "BaseBdev2", 00:18:49.955 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:49.955 "is_configured": true, 00:18:49.955 "data_offset": 2048, 00:18:49.955 "data_size": 63488 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "name": "BaseBdev3", 00:18:49.955 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:49.955 "is_configured": true, 00:18:49.955 "data_offset": 2048, 00:18:49.955 "data_size": 63488 00:18:49.955 }, 00:18:49.955 { 00:18:49.955 "name": "BaseBdev4", 00:18:49.955 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:49.955 "is_configured": true, 00:18:49.955 "data_offset": 2048, 00:18:49.955 "data_size": 63488 00:18:49.955 } 00:18:49.955 ] 00:18:49.955 }' 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.955 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.214 "name": "raid_bdev1", 00:18:50.214 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:50.214 "strip_size_kb": 64, 00:18:50.214 "state": "online", 00:18:50.214 "raid_level": "raid5f", 00:18:50.214 "superblock": true, 00:18:50.214 "num_base_bdevs": 4, 00:18:50.214 "num_base_bdevs_discovered": 3, 00:18:50.214 "num_base_bdevs_operational": 3, 00:18:50.214 "base_bdevs_list": [ 00:18:50.214 { 00:18:50.214 "name": null, 00:18:50.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.214 "is_configured": false, 00:18:50.214 "data_offset": 0, 00:18:50.214 "data_size": 63488 00:18:50.214 }, 00:18:50.214 { 00:18:50.214 "name": "BaseBdev2", 00:18:50.214 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:50.214 "is_configured": true, 00:18:50.214 "data_offset": 2048, 00:18:50.214 "data_size": 63488 00:18:50.214 }, 00:18:50.214 { 00:18:50.214 "name": "BaseBdev3", 00:18:50.214 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:50.214 "is_configured": true, 00:18:50.214 "data_offset": 2048, 00:18:50.214 "data_size": 63488 00:18:50.214 }, 00:18:50.214 { 00:18:50.214 "name": "BaseBdev4", 00:18:50.214 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:50.214 "is_configured": true, 00:18:50.214 "data_offset": 2048, 00:18:50.214 "data_size": 63488 00:18:50.214 } 00:18:50.214 ] 00:18:50.214 }' 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.214 
03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.214 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.473 [2024-11-05 03:30:03.859291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.473 [2024-11-05 03:30:03.859548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.473 [2024-11-05 03:30:03.859574] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:50.473 request: 00:18:50.473 { 00:18:50.473 "base_bdev": "BaseBdev1", 00:18:50.473 "raid_bdev": "raid_bdev1", 00:18:50.473 "method": "bdev_raid_add_base_bdev", 00:18:50.473 "req_id": 1 00:18:50.473 } 00:18:50.473 Got JSON-RPC error response 00:18:50.473 response: 00:18:50.473 { 00:18:50.473 "code": -22, 00:18:50.473 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:50.473 } 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.473 03:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.410 "name": "raid_bdev1", 00:18:51.410 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:51.410 "strip_size_kb": 64, 00:18:51.410 "state": "online", 00:18:51.410 "raid_level": "raid5f", 00:18:51.410 "superblock": true, 00:18:51.410 "num_base_bdevs": 4, 00:18:51.410 "num_base_bdevs_discovered": 3, 00:18:51.410 "num_base_bdevs_operational": 3, 00:18:51.410 "base_bdevs_list": [ 00:18:51.410 { 00:18:51.410 "name": null, 00:18:51.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.410 "is_configured": false, 00:18:51.410 "data_offset": 0, 00:18:51.410 "data_size": 63488 00:18:51.410 }, 00:18:51.410 { 00:18:51.410 "name": "BaseBdev2", 00:18:51.410 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:51.410 "is_configured": true, 00:18:51.410 "data_offset": 2048, 00:18:51.410 "data_size": 63488 00:18:51.410 }, 00:18:51.410 { 00:18:51.410 "name": "BaseBdev3", 00:18:51.410 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:51.410 "is_configured": true, 00:18:51.410 "data_offset": 2048, 00:18:51.410 "data_size": 63488 00:18:51.410 }, 00:18:51.410 { 00:18:51.410 "name": "BaseBdev4", 00:18:51.410 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:51.410 "is_configured": true, 00:18:51.410 "data_offset": 2048, 00:18:51.410 "data_size": 63488 00:18:51.410 } 00:18:51.410 ] 00:18:51.410 }' 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.410 03:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.978 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.978 "name": "raid_bdev1", 00:18:51.978 "uuid": "67467cd0-0fba-471a-a619-c84679a48400", 00:18:51.978 "strip_size_kb": 64, 00:18:51.978 "state": "online", 00:18:51.978 "raid_level": "raid5f", 00:18:51.978 "superblock": true, 00:18:51.978 "num_base_bdevs": 4, 00:18:51.978 "num_base_bdevs_discovered": 3, 00:18:51.978 "num_base_bdevs_operational": 3, 00:18:51.978 "base_bdevs_list": [ 00:18:51.978 { 00:18:51.978 "name": null, 00:18:51.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.978 "is_configured": false, 00:18:51.978 "data_offset": 0, 00:18:51.978 "data_size": 63488 00:18:51.978 }, 00:18:51.978 { 00:18:51.978 "name": "BaseBdev2", 00:18:51.978 "uuid": "aa34cc04-7338-52cc-bab7-ce85150ad231", 00:18:51.978 "is_configured": true, 
00:18:51.978 "data_offset": 2048, 00:18:51.978 "data_size": 63488 00:18:51.978 }, 00:18:51.978 { 00:18:51.978 "name": "BaseBdev3", 00:18:51.978 "uuid": "eb10d46d-b470-5f3e-89b4-a42cd039e111", 00:18:51.978 "is_configured": true, 00:18:51.978 "data_offset": 2048, 00:18:51.978 "data_size": 63488 00:18:51.978 }, 00:18:51.979 { 00:18:51.979 "name": "BaseBdev4", 00:18:51.979 "uuid": "76d9024b-3c1d-5296-a77e-1e50901da912", 00:18:51.979 "is_configured": true, 00:18:51.979 "data_offset": 2048, 00:18:51.979 "data_size": 63488 00:18:51.979 } 00:18:51.979 ] 00:18:51.979 }' 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85283 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85283 ']' 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85283 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.979 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85283 00:18:52.238 killing process with pid 85283 00:18:52.238 Received shutdown signal, test time was about 60.000000 seconds 00:18:52.238 00:18:52.238 Latency(us) 00:18:52.238 [2024-11-05T03:30:05.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.238 [2024-11-05T03:30:05.877Z] 
=================================================================================================================== 00:18:52.238 [2024-11-05T03:30:05.877Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.238 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:52.238 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:52.238 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85283' 00:18:52.238 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85283 00:18:52.238 03:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85283 00:18:52.238 [2024-11-05 03:30:05.630613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.238 [2024-11-05 03:30:05.630801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.238 [2024-11-05 03:30:05.630946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.238 [2024-11-05 03:30:05.630970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:52.497 [2024-11-05 03:30:06.053429] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.435 03:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:53.435 00:18:53.435 real 0m28.672s 00:18:53.435 user 0m37.460s 00:18:53.435 sys 0m2.931s 00:18:53.435 03:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:53.435 03:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.435 ************************************ 00:18:53.435 END TEST raid5f_rebuild_test_sb 00:18:53.435 ************************************ 00:18:53.694 03:30:07 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:53.694 03:30:07 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:53.694 03:30:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:53.694 03:30:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:53.694 03:30:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.694 ************************************ 00:18:53.694 START TEST raid_state_function_test_sb_4k 00:18:53.694 ************************************ 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:53.694 03:30:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:53.694 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86106 00:18:53.695 Process raid pid: 86106 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86106' 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86106 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86106 ']' 00:18:53.695 03:30:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.695 03:30:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.695 [2024-11-05 03:30:07.205532] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:18:53.695 [2024-11-05 03:30:07.205708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.953 [2024-11-05 03:30:07.397245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.953 [2024-11-05 03:30:07.525529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.213 [2024-11-05 03:30:07.735768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.213 [2024-11-05 03:30:07.735830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.780 [2024-11-05 03:30:08.210947] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.780 [2024-11-05 03:30:08.211015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.780 [2024-11-05 03:30:08.211031] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.780 [2024-11-05 03:30:08.211046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.780 
03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.780 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.781 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.781 "name": "Existed_Raid", 00:18:54.781 "uuid": "694d7886-9fce-4318-a8d2-4d86d3ed201b", 00:18:54.781 "strip_size_kb": 0, 00:18:54.781 "state": "configuring", 00:18:54.781 "raid_level": "raid1", 00:18:54.781 "superblock": true, 00:18:54.781 "num_base_bdevs": 2, 00:18:54.781 "num_base_bdevs_discovered": 0, 00:18:54.781 "num_base_bdevs_operational": 2, 00:18:54.781 "base_bdevs_list": [ 00:18:54.781 { 00:18:54.781 "name": "BaseBdev1", 00:18:54.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.781 "is_configured": false, 00:18:54.781 "data_offset": 0, 00:18:54.781 "data_size": 0 00:18:54.781 }, 00:18:54.781 { 00:18:54.781 "name": "BaseBdev2", 00:18:54.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.781 "is_configured": false, 00:18:54.781 "data_offset": 0, 00:18:54.781 "data_size": 0 00:18:54.781 } 00:18:54.781 ] 00:18:54.781 }' 00:18:54.781 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.781 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.091 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:55.091 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.091 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 [2024-11-05 03:30:08.707206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.364 [2024-11-05 03:30:08.707283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 [2024-11-05 03:30:08.715140] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.364 [2024-11-05 03:30:08.715200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.364 [2024-11-05 03:30:08.715214] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.364 [2024-11-05 03:30:08.715231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.364 03:30:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 [2024-11-05 03:30:08.763330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.364 BaseBdev1 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 [ 00:18:55.364 { 00:18:55.364 "name": "BaseBdev1", 00:18:55.364 "aliases": [ 00:18:55.364 
"a3423210-39f9-4b06-96f5-ad276b295b40" 00:18:55.364 ], 00:18:55.364 "product_name": "Malloc disk", 00:18:55.364 "block_size": 4096, 00:18:55.364 "num_blocks": 8192, 00:18:55.364 "uuid": "a3423210-39f9-4b06-96f5-ad276b295b40", 00:18:55.364 "assigned_rate_limits": { 00:18:55.364 "rw_ios_per_sec": 0, 00:18:55.364 "rw_mbytes_per_sec": 0, 00:18:55.364 "r_mbytes_per_sec": 0, 00:18:55.364 "w_mbytes_per_sec": 0 00:18:55.364 }, 00:18:55.364 "claimed": true, 00:18:55.364 "claim_type": "exclusive_write", 00:18:55.364 "zoned": false, 00:18:55.364 "supported_io_types": { 00:18:55.364 "read": true, 00:18:55.364 "write": true, 00:18:55.364 "unmap": true, 00:18:55.364 "flush": true, 00:18:55.364 "reset": true, 00:18:55.364 "nvme_admin": false, 00:18:55.364 "nvme_io": false, 00:18:55.364 "nvme_io_md": false, 00:18:55.364 "write_zeroes": true, 00:18:55.364 "zcopy": true, 00:18:55.364 "get_zone_info": false, 00:18:55.364 "zone_management": false, 00:18:55.364 "zone_append": false, 00:18:55.364 "compare": false, 00:18:55.364 "compare_and_write": false, 00:18:55.364 "abort": true, 00:18:55.364 "seek_hole": false, 00:18:55.364 "seek_data": false, 00:18:55.364 "copy": true, 00:18:55.364 "nvme_iov_md": false 00:18:55.364 }, 00:18:55.364 "memory_domains": [ 00:18:55.364 { 00:18:55.364 "dma_device_id": "system", 00:18:55.364 "dma_device_type": 1 00:18:55.364 }, 00:18:55.364 { 00:18:55.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.364 "dma_device_type": 2 00:18:55.364 } 00:18:55.364 ], 00:18:55.364 "driver_specific": {} 00:18:55.364 } 00:18:55.364 ] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.364 "name": "Existed_Raid", 00:18:55.364 "uuid": "d2997f0b-d35a-4379-80e2-82cadd737cd1", 00:18:55.364 "strip_size_kb": 0, 00:18:55.364 "state": "configuring", 00:18:55.364 "raid_level": "raid1", 00:18:55.364 "superblock": true, 00:18:55.364 "num_base_bdevs": 2, 00:18:55.364 
"num_base_bdevs_discovered": 1, 00:18:55.364 "num_base_bdevs_operational": 2, 00:18:55.364 "base_bdevs_list": [ 00:18:55.364 { 00:18:55.364 "name": "BaseBdev1", 00:18:55.364 "uuid": "a3423210-39f9-4b06-96f5-ad276b295b40", 00:18:55.364 "is_configured": true, 00:18:55.364 "data_offset": 256, 00:18:55.364 "data_size": 7936 00:18:55.364 }, 00:18:55.364 { 00:18:55.364 "name": "BaseBdev2", 00:18:55.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.364 "is_configured": false, 00:18:55.364 "data_offset": 0, 00:18:55.364 "data_size": 0 00:18:55.364 } 00:18:55.364 ] 00:18:55.364 }' 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.364 03:30:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.932 [2024-11-05 03:30:09.323718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.932 [2024-11-05 03:30:09.323786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.932 [2024-11-05 03:30:09.335766] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.932 [2024-11-05 03:30:09.338389] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.932 [2024-11-05 03:30:09.338443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.932 "name": "Existed_Raid", 00:18:55.932 "uuid": "0fc930e7-f7b8-4b8a-b8b2-b29366487628", 00:18:55.932 "strip_size_kb": 0, 00:18:55.932 "state": "configuring", 00:18:55.932 "raid_level": "raid1", 00:18:55.932 "superblock": true, 00:18:55.932 "num_base_bdevs": 2, 00:18:55.932 "num_base_bdevs_discovered": 1, 00:18:55.932 "num_base_bdevs_operational": 2, 00:18:55.932 "base_bdevs_list": [ 00:18:55.932 { 00:18:55.932 "name": "BaseBdev1", 00:18:55.932 "uuid": "a3423210-39f9-4b06-96f5-ad276b295b40", 00:18:55.932 "is_configured": true, 00:18:55.932 "data_offset": 256, 00:18:55.932 "data_size": 7936 00:18:55.932 }, 00:18:55.932 { 00:18:55.932 "name": "BaseBdev2", 00:18:55.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.932 "is_configured": false, 00:18:55.932 "data_offset": 0, 00:18:55.932 "data_size": 0 00:18:55.932 } 00:18:55.932 ] 00:18:55.932 }' 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.932 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.499 03:30:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.499 [2024-11-05 03:30:09.879962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.499 [2024-11-05 03:30:09.880274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:56.499 [2024-11-05 03:30:09.880291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:56.499 BaseBdev2 00:18:56.499 [2024-11-05 03:30:09.880700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:56.499 [2024-11-05 03:30:09.880895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:56.499 [2024-11-05 03:30:09.881024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:56.499 [2024-11-05 03:30:09.881222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:56.499 03:30:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.499 [ 00:18:56.499 { 00:18:56.499 "name": "BaseBdev2", 00:18:56.499 "aliases": [ 00:18:56.499 "37c7b9d4-b995-491e-b229-eb001ff0a4bb" 00:18:56.499 ], 00:18:56.499 "product_name": "Malloc disk", 00:18:56.499 "block_size": 4096, 00:18:56.499 "num_blocks": 8192, 00:18:56.499 "uuid": "37c7b9d4-b995-491e-b229-eb001ff0a4bb", 00:18:56.499 "assigned_rate_limits": { 00:18:56.499 "rw_ios_per_sec": 0, 00:18:56.499 "rw_mbytes_per_sec": 0, 00:18:56.499 "r_mbytes_per_sec": 0, 00:18:56.499 "w_mbytes_per_sec": 0 00:18:56.499 }, 00:18:56.499 "claimed": true, 00:18:56.499 "claim_type": "exclusive_write", 00:18:56.499 "zoned": false, 00:18:56.499 "supported_io_types": { 00:18:56.499 "read": true, 00:18:56.499 "write": true, 00:18:56.499 "unmap": true, 00:18:56.499 "flush": true, 00:18:56.499 "reset": true, 00:18:56.499 "nvme_admin": false, 00:18:56.499 "nvme_io": false, 00:18:56.499 "nvme_io_md": false, 00:18:56.499 "write_zeroes": true, 00:18:56.499 "zcopy": true, 00:18:56.499 "get_zone_info": false, 00:18:56.499 "zone_management": false, 00:18:56.499 "zone_append": false, 00:18:56.499 "compare": false, 00:18:56.499 "compare_and_write": false, 00:18:56.499 "abort": true, 00:18:56.499 "seek_hole": false, 00:18:56.499 "seek_data": false, 00:18:56.499 "copy": true, 00:18:56.499 "nvme_iov_md": false 
00:18:56.499 }, 00:18:56.499 "memory_domains": [ 00:18:56.499 { 00:18:56.499 "dma_device_id": "system", 00:18:56.499 "dma_device_type": 1 00:18:56.499 }, 00:18:56.499 { 00:18:56.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.499 "dma_device_type": 2 00:18:56.499 } 00:18:56.499 ], 00:18:56.499 "driver_specific": {} 00:18:56.499 } 00:18:56.499 ] 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.499 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.500 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.500 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.500 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.500 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.500 "name": "Existed_Raid", 00:18:56.500 "uuid": "0fc930e7-f7b8-4b8a-b8b2-b29366487628", 00:18:56.500 "strip_size_kb": 0, 00:18:56.500 "state": "online", 00:18:56.500 "raid_level": "raid1", 00:18:56.500 "superblock": true, 00:18:56.500 "num_base_bdevs": 2, 00:18:56.500 "num_base_bdevs_discovered": 2, 00:18:56.500 "num_base_bdevs_operational": 2, 00:18:56.500 "base_bdevs_list": [ 00:18:56.500 { 00:18:56.500 "name": "BaseBdev1", 00:18:56.500 "uuid": "a3423210-39f9-4b06-96f5-ad276b295b40", 00:18:56.500 "is_configured": true, 00:18:56.500 "data_offset": 256, 00:18:56.500 "data_size": 7936 00:18:56.500 }, 00:18:56.500 { 00:18:56.500 "name": "BaseBdev2", 00:18:56.500 "uuid": "37c7b9d4-b995-491e-b229-eb001ff0a4bb", 00:18:56.500 "is_configured": true, 00:18:56.500 "data_offset": 256, 00:18:56.500 "data_size": 7936 00:18:56.500 } 00:18:56.500 ] 00:18:56.500 }' 00:18:56.500 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.500 03:30:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:57.068 03:30:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.068 [2024-11-05 03:30:10.472648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.068 "name": "Existed_Raid", 00:18:57.068 "aliases": [ 00:18:57.068 "0fc930e7-f7b8-4b8a-b8b2-b29366487628" 00:18:57.068 ], 00:18:57.068 "product_name": "Raid Volume", 00:18:57.068 "block_size": 4096, 00:18:57.068 "num_blocks": 7936, 00:18:57.068 "uuid": "0fc930e7-f7b8-4b8a-b8b2-b29366487628", 00:18:57.068 "assigned_rate_limits": { 00:18:57.068 "rw_ios_per_sec": 0, 00:18:57.068 "rw_mbytes_per_sec": 0, 00:18:57.068 "r_mbytes_per_sec": 0, 00:18:57.068 "w_mbytes_per_sec": 0 00:18:57.068 }, 00:18:57.068 "claimed": false, 00:18:57.068 "zoned": false, 00:18:57.068 "supported_io_types": { 00:18:57.068 "read": true, 
00:18:57.068 "write": true, 00:18:57.068 "unmap": false, 00:18:57.068 "flush": false, 00:18:57.068 "reset": true, 00:18:57.068 "nvme_admin": false, 00:18:57.068 "nvme_io": false, 00:18:57.068 "nvme_io_md": false, 00:18:57.068 "write_zeroes": true, 00:18:57.068 "zcopy": false, 00:18:57.068 "get_zone_info": false, 00:18:57.068 "zone_management": false, 00:18:57.068 "zone_append": false, 00:18:57.068 "compare": false, 00:18:57.068 "compare_and_write": false, 00:18:57.068 "abort": false, 00:18:57.068 "seek_hole": false, 00:18:57.068 "seek_data": false, 00:18:57.068 "copy": false, 00:18:57.068 "nvme_iov_md": false 00:18:57.068 }, 00:18:57.068 "memory_domains": [ 00:18:57.068 { 00:18:57.068 "dma_device_id": "system", 00:18:57.068 "dma_device_type": 1 00:18:57.068 }, 00:18:57.068 { 00:18:57.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.068 "dma_device_type": 2 00:18:57.068 }, 00:18:57.068 { 00:18:57.068 "dma_device_id": "system", 00:18:57.068 "dma_device_type": 1 00:18:57.068 }, 00:18:57.068 { 00:18:57.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.068 "dma_device_type": 2 00:18:57.068 } 00:18:57.068 ], 00:18:57.068 "driver_specific": { 00:18:57.068 "raid": { 00:18:57.068 "uuid": "0fc930e7-f7b8-4b8a-b8b2-b29366487628", 00:18:57.068 "strip_size_kb": 0, 00:18:57.068 "state": "online", 00:18:57.068 "raid_level": "raid1", 00:18:57.068 "superblock": true, 00:18:57.068 "num_base_bdevs": 2, 00:18:57.068 "num_base_bdevs_discovered": 2, 00:18:57.068 "num_base_bdevs_operational": 2, 00:18:57.068 "base_bdevs_list": [ 00:18:57.068 { 00:18:57.068 "name": "BaseBdev1", 00:18:57.068 "uuid": "a3423210-39f9-4b06-96f5-ad276b295b40", 00:18:57.068 "is_configured": true, 00:18:57.068 "data_offset": 256, 00:18:57.068 "data_size": 7936 00:18:57.068 }, 00:18:57.068 { 00:18:57.068 "name": "BaseBdev2", 00:18:57.068 "uuid": "37c7b9d4-b995-491e-b229-eb001ff0a4bb", 00:18:57.068 "is_configured": true, 00:18:57.068 "data_offset": 256, 00:18:57.068 "data_size": 7936 00:18:57.068 } 
00:18:57.068 ] 00:18:57.068 } 00:18:57.068 } 00:18:57.068 }' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:57.068 BaseBdev2' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.068 03:30:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.068 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.327 [2024-11-05 03:30:10.756502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:57.327 03:30:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.327 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.328 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.328 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.328 "name": "Existed_Raid", 00:18:57.328 "uuid": "0fc930e7-f7b8-4b8a-b8b2-b29366487628", 00:18:57.328 "strip_size_kb": 0, 00:18:57.328 "state": "online", 00:18:57.328 "raid_level": "raid1", 00:18:57.328 "superblock": true, 00:18:57.328 
"num_base_bdevs": 2, 00:18:57.328 "num_base_bdevs_discovered": 1, 00:18:57.328 "num_base_bdevs_operational": 1, 00:18:57.328 "base_bdevs_list": [ 00:18:57.328 { 00:18:57.328 "name": null, 00:18:57.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.328 "is_configured": false, 00:18:57.328 "data_offset": 0, 00:18:57.328 "data_size": 7936 00:18:57.328 }, 00:18:57.328 { 00:18:57.328 "name": "BaseBdev2", 00:18:57.328 "uuid": "37c7b9d4-b995-491e-b229-eb001ff0a4bb", 00:18:57.328 "is_configured": true, 00:18:57.328 "data_offset": 256, 00:18:57.328 "data_size": 7936 00:18:57.328 } 00:18:57.328 ] 00:18:57.328 }' 00:18:57.328 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.328 03:30:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.895 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.895 [2024-11-05 03:30:11.474147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:57.895 [2024-11-05 03:30:11.474392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.154 [2024-11-05 03:30:11.570105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.154 [2024-11-05 03:30:11.570424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.154 [2024-11-05 03:30:11.570602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:58.154 03:30:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86106 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86106 ']' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86106 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86106 00:18:58.154 killing process with pid 86106 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86106' 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86106 00:18:58.154 [2024-11-05 03:30:11.667905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:58.154 03:30:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86106 00:18:58.154 [2024-11-05 03:30:11.684386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.530 03:30:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:59.530 00:18:59.530 real 0m5.704s 00:18:59.530 user 0m8.569s 00:18:59.530 sys 0m0.866s 00:18:59.530 03:30:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:59.530 03:30:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.530 ************************************ 00:18:59.530 END TEST raid_state_function_test_sb_4k 00:18:59.530 ************************************ 00:18:59.530 03:30:12 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:59.530 03:30:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:59.530 03:30:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:59.530 03:30:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.530 ************************************ 00:18:59.530 START TEST raid_superblock_test_4k 00:18:59.530 ************************************ 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:59.530 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:59.531 
03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86358 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86358 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86358 ']' 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.531 03:30:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.531 [2024-11-05 03:30:12.964405] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:18:59.531 [2024-11-05 03:30:12.964606] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86358 ] 00:18:59.531 [2024-11-05 03:30:13.146082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.789 [2024-11-05 03:30:13.283103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.047 [2024-11-05 03:30:13.500172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.047 [2024-11-05 03:30:13.500207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.613 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.613 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:19:00.613 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:00.613 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:00.613 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:00.613 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.614 malloc1 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.614 03:30:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.614 [2024-11-05 03:30:14.004807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:00.614 [2024-11-05 03:30:14.004899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.614 [2024-11-05 03:30:14.004932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:00.614 [2024-11-05 03:30:14.004947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.614 [2024-11-05 03:30:14.007878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.614 [2024-11-05 03:30:14.008105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:00.614 pt1 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.614 malloc2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.614 [2024-11-05 03:30:14.062366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:00.614 [2024-11-05 03:30:14.062602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.614 [2024-11-05 03:30:14.062696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:00.614 [2024-11-05 03:30:14.062955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.614 [2024-11-05 03:30:14.066014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.614 [2024-11-05 
03:30:14.066236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:00.614 pt2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.614 [2024-11-05 03:30:14.074644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:00.614 [2024-11-05 03:30:14.077394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.614 [2024-11-05 03:30:14.077659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:00.614 [2024-11-05 03:30:14.077697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:00.614 [2024-11-05 03:30:14.078037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:00.614 [2024-11-05 03:30:14.078310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:00.614 [2024-11-05 03:30:14.078360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:00.614 [2024-11-05 03:30:14.078724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.614 "name": "raid_bdev1", 00:19:00.614 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:00.614 "strip_size_kb": 0, 00:19:00.614 "state": "online", 00:19:00.614 "raid_level": "raid1", 00:19:00.614 "superblock": true, 00:19:00.614 "num_base_bdevs": 2, 00:19:00.614 
"num_base_bdevs_discovered": 2, 00:19:00.614 "num_base_bdevs_operational": 2, 00:19:00.614 "base_bdevs_list": [ 00:19:00.614 { 00:19:00.614 "name": "pt1", 00:19:00.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.614 "is_configured": true, 00:19:00.614 "data_offset": 256, 00:19:00.614 "data_size": 7936 00:19:00.614 }, 00:19:00.614 { 00:19:00.614 "name": "pt2", 00:19:00.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.614 "is_configured": true, 00:19:00.614 "data_offset": 256, 00:19:00.614 "data_size": 7936 00:19:00.614 } 00:19:00.614 ] 00:19:00.614 }' 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.614 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.182 [2024-11-05 03:30:14.603189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.182 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.182 "name": "raid_bdev1", 00:19:01.182 "aliases": [ 00:19:01.182 "baac91cc-cd58-4e93-bd7e-e9dd574854b6" 00:19:01.182 ], 00:19:01.182 "product_name": "Raid Volume", 00:19:01.182 "block_size": 4096, 00:19:01.182 "num_blocks": 7936, 00:19:01.182 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:01.182 "assigned_rate_limits": { 00:19:01.182 "rw_ios_per_sec": 0, 00:19:01.182 "rw_mbytes_per_sec": 0, 00:19:01.182 "r_mbytes_per_sec": 0, 00:19:01.182 "w_mbytes_per_sec": 0 00:19:01.182 }, 00:19:01.182 "claimed": false, 00:19:01.182 "zoned": false, 00:19:01.182 "supported_io_types": { 00:19:01.182 "read": true, 00:19:01.182 "write": true, 00:19:01.182 "unmap": false, 00:19:01.182 "flush": false, 00:19:01.182 "reset": true, 00:19:01.182 "nvme_admin": false, 00:19:01.182 "nvme_io": false, 00:19:01.182 "nvme_io_md": false, 00:19:01.182 "write_zeroes": true, 00:19:01.182 "zcopy": false, 00:19:01.182 "get_zone_info": false, 00:19:01.182 "zone_management": false, 00:19:01.182 "zone_append": false, 00:19:01.182 "compare": false, 00:19:01.182 "compare_and_write": false, 00:19:01.182 "abort": false, 00:19:01.182 "seek_hole": false, 00:19:01.182 "seek_data": false, 00:19:01.182 "copy": false, 00:19:01.182 "nvme_iov_md": false 00:19:01.182 }, 00:19:01.182 "memory_domains": [ 00:19:01.182 { 00:19:01.182 "dma_device_id": "system", 00:19:01.182 "dma_device_type": 1 00:19:01.182 }, 00:19:01.182 { 00:19:01.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.182 "dma_device_type": 2 00:19:01.182 }, 00:19:01.182 { 00:19:01.182 "dma_device_id": "system", 00:19:01.182 "dma_device_type": 1 00:19:01.182 }, 00:19:01.182 { 00:19:01.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.182 "dma_device_type": 2 00:19:01.182 } 00:19:01.182 ], 
00:19:01.182 "driver_specific": { 00:19:01.182 "raid": { 00:19:01.182 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:01.182 "strip_size_kb": 0, 00:19:01.182 "state": "online", 00:19:01.182 "raid_level": "raid1", 00:19:01.182 "superblock": true, 00:19:01.182 "num_base_bdevs": 2, 00:19:01.182 "num_base_bdevs_discovered": 2, 00:19:01.182 "num_base_bdevs_operational": 2, 00:19:01.182 "base_bdevs_list": [ 00:19:01.182 { 00:19:01.182 "name": "pt1", 00:19:01.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.182 "is_configured": true, 00:19:01.182 "data_offset": 256, 00:19:01.182 "data_size": 7936 00:19:01.182 }, 00:19:01.182 { 00:19:01.182 "name": "pt2", 00:19:01.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.182 "is_configured": true, 00:19:01.183 "data_offset": 256, 00:19:01.183 "data_size": 7936 00:19:01.183 } 00:19:01.183 ] 00:19:01.183 } 00:19:01.183 } 00:19:01.183 }' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:01.183 pt2' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.183 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.442 [2024-11-05 03:30:14.871368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=baac91cc-cd58-4e93-bd7e-e9dd574854b6 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z baac91cc-cd58-4e93-bd7e-e9dd574854b6 ']' 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.442 [2024-11-05 03:30:14.918950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.442 [2024-11-05 03:30:14.919134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.442 [2024-11-05 03:30:14.919242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.442 [2024-11-05 03:30:14.919369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.442 [2024-11-05 03:30:14.919394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.442 03:30:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.443 [2024-11-05 03:30:15.059068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:01.443 [2024-11-05 03:30:15.061983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:01.443 [2024-11-05 03:30:15.062074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:01.443 [2024-11-05 03:30:15.062162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:01.443 [2024-11-05 03:30:15.062188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.443 [2024-11-05 03:30:15.062203] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:01.443 request: 00:19:01.443 { 00:19:01.443 "name": "raid_bdev1", 00:19:01.443 "raid_level": "raid1", 00:19:01.443 "base_bdevs": [ 00:19:01.443 "malloc1", 00:19:01.443 "malloc2" 00:19:01.443 ], 00:19:01.443 "superblock": false, 00:19:01.443 "method": "bdev_raid_create", 00:19:01.443 "req_id": 1 00:19:01.443 } 00:19:01.443 Got JSON-RPC error response 00:19:01.443 response: 00:19:01.443 { 00:19:01.443 "code": -17, 00:19:01.443 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:01.443 } 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.443 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.702 [2024-11-05 03:30:15.123174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.702 [2024-11-05 03:30:15.123413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.702 [2024-11-05 03:30:15.123449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:01.702 [2024-11-05 03:30:15.123468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.702 [2024-11-05 03:30:15.126777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.702 [2024-11-05 03:30:15.126829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.702 [2024-11-05 03:30:15.126979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:01.702 [2024-11-05 03:30:15.127071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:01.702 pt1 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.702 "name": "raid_bdev1", 00:19:01.702 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:01.702 "strip_size_kb": 0, 00:19:01.702 "state": "configuring", 00:19:01.702 "raid_level": "raid1", 00:19:01.702 "superblock": true, 00:19:01.702 "num_base_bdevs": 2, 00:19:01.702 "num_base_bdevs_discovered": 1, 00:19:01.702 "num_base_bdevs_operational": 2, 00:19:01.702 "base_bdevs_list": [ 00:19:01.702 { 00:19:01.702 "name": "pt1", 00:19:01.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.702 "is_configured": true, 00:19:01.702 "data_offset": 256, 00:19:01.702 "data_size": 7936 00:19:01.702 }, 00:19:01.702 { 00:19:01.702 "name": null, 00:19:01.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.702 "is_configured": false, 00:19:01.702 "data_offset": 256, 00:19:01.702 "data_size": 7936 00:19:01.702 } 
00:19:01.702 ] 00:19:01.702 }' 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.702 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.269 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:02.269 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:02.269 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:02.269 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.269 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.269 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.269 [2024-11-05 03:30:15.659700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.269 [2024-11-05 03:30:15.659967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.269 [2024-11-05 03:30:15.660014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:02.269 [2024-11-05 03:30:15.660034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.269 [2024-11-05 03:30:15.660794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.269 [2024-11-05 03:30:15.660844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.269 [2024-11-05 03:30:15.660974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:02.269 [2024-11-05 03:30:15.661011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.269 [2024-11-05 03:30:15.661181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:02.269 [2024-11-05 03:30:15.661201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.269 [2024-11-05 03:30:15.661591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:02.270 [2024-11-05 03:30:15.661822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:02.270 [2024-11-05 03:30:15.661853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:02.270 [2024-11-05 03:30:15.662071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.270 pt2 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.270 "name": "raid_bdev1", 00:19:02.270 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:02.270 "strip_size_kb": 0, 00:19:02.270 "state": "online", 00:19:02.270 "raid_level": "raid1", 00:19:02.270 "superblock": true, 00:19:02.270 "num_base_bdevs": 2, 00:19:02.270 "num_base_bdevs_discovered": 2, 00:19:02.270 "num_base_bdevs_operational": 2, 00:19:02.270 "base_bdevs_list": [ 00:19:02.270 { 00:19:02.270 "name": "pt1", 00:19:02.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.270 "is_configured": true, 00:19:02.270 "data_offset": 256, 00:19:02.270 "data_size": 7936 00:19:02.270 }, 00:19:02.270 { 00:19:02.270 "name": "pt2", 00:19:02.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.270 "is_configured": true, 00:19:02.270 "data_offset": 256, 00:19:02.270 "data_size": 7936 00:19:02.270 } 00:19:02.270 ] 00:19:02.270 }' 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.270 03:30:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.837 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:02.837 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:02.837 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:02.837 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:02.837 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:02.837 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:02.838 [2024-11-05 03:30:16.212284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.838 "name": "raid_bdev1", 00:19:02.838 "aliases": [ 00:19:02.838 "baac91cc-cd58-4e93-bd7e-e9dd574854b6" 00:19:02.838 ], 00:19:02.838 "product_name": "Raid Volume", 00:19:02.838 "block_size": 4096, 00:19:02.838 "num_blocks": 7936, 00:19:02.838 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:02.838 "assigned_rate_limits": { 00:19:02.838 "rw_ios_per_sec": 0, 00:19:02.838 "rw_mbytes_per_sec": 0, 00:19:02.838 "r_mbytes_per_sec": 0, 00:19:02.838 "w_mbytes_per_sec": 0 00:19:02.838 }, 00:19:02.838 "claimed": false, 00:19:02.838 "zoned": false, 00:19:02.838 "supported_io_types": { 00:19:02.838 "read": true, 00:19:02.838 "write": true, 00:19:02.838 "unmap": false, 
00:19:02.838 "flush": false, 00:19:02.838 "reset": true, 00:19:02.838 "nvme_admin": false, 00:19:02.838 "nvme_io": false, 00:19:02.838 "nvme_io_md": false, 00:19:02.838 "write_zeroes": true, 00:19:02.838 "zcopy": false, 00:19:02.838 "get_zone_info": false, 00:19:02.838 "zone_management": false, 00:19:02.838 "zone_append": false, 00:19:02.838 "compare": false, 00:19:02.838 "compare_and_write": false, 00:19:02.838 "abort": false, 00:19:02.838 "seek_hole": false, 00:19:02.838 "seek_data": false, 00:19:02.838 "copy": false, 00:19:02.838 "nvme_iov_md": false 00:19:02.838 }, 00:19:02.838 "memory_domains": [ 00:19:02.838 { 00:19:02.838 "dma_device_id": "system", 00:19:02.838 "dma_device_type": 1 00:19:02.838 }, 00:19:02.838 { 00:19:02.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.838 "dma_device_type": 2 00:19:02.838 }, 00:19:02.838 { 00:19:02.838 "dma_device_id": "system", 00:19:02.838 "dma_device_type": 1 00:19:02.838 }, 00:19:02.838 { 00:19:02.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.838 "dma_device_type": 2 00:19:02.838 } 00:19:02.838 ], 00:19:02.838 "driver_specific": { 00:19:02.838 "raid": { 00:19:02.838 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:02.838 "strip_size_kb": 0, 00:19:02.838 "state": "online", 00:19:02.838 "raid_level": "raid1", 00:19:02.838 "superblock": true, 00:19:02.838 "num_base_bdevs": 2, 00:19:02.838 "num_base_bdevs_discovered": 2, 00:19:02.838 "num_base_bdevs_operational": 2, 00:19:02.838 "base_bdevs_list": [ 00:19:02.838 { 00:19:02.838 "name": "pt1", 00:19:02.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.838 "is_configured": true, 00:19:02.838 "data_offset": 256, 00:19:02.838 "data_size": 7936 00:19:02.838 }, 00:19:02.838 { 00:19:02.838 "name": "pt2", 00:19:02.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.838 "is_configured": true, 00:19:02.838 "data_offset": 256, 00:19:02.838 "data_size": 7936 00:19:02.838 } 00:19:02.838 ] 00:19:02.838 } 00:19:02.838 } 00:19:02.838 }' 00:19:02.838 
03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:02.838 pt2' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.838 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.097 [2024-11-05 03:30:16.484336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' baac91cc-cd58-4e93-bd7e-e9dd574854b6 '!=' baac91cc-cd58-4e93-bd7e-e9dd574854b6 ']' 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.097 [2024-11-05 03:30:16.536048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.097 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.098 "name": "raid_bdev1", 00:19:03.098 "uuid": 
"baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:03.098 "strip_size_kb": 0, 00:19:03.098 "state": "online", 00:19:03.098 "raid_level": "raid1", 00:19:03.098 "superblock": true, 00:19:03.098 "num_base_bdevs": 2, 00:19:03.098 "num_base_bdevs_discovered": 1, 00:19:03.098 "num_base_bdevs_operational": 1, 00:19:03.098 "base_bdevs_list": [ 00:19:03.098 { 00:19:03.098 "name": null, 00:19:03.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.098 "is_configured": false, 00:19:03.098 "data_offset": 0, 00:19:03.098 "data_size": 7936 00:19:03.098 }, 00:19:03.098 { 00:19:03.098 "name": "pt2", 00:19:03.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.098 "is_configured": true, 00:19:03.098 "data_offset": 256, 00:19:03.098 "data_size": 7936 00:19:03.098 } 00:19:03.098 ] 00:19:03.098 }' 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.098 03:30:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.665 [2024-11-05 03:30:17.076206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.665 [2024-11-05 03:30:17.076303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.665 [2024-11-05 03:30:17.076444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.665 [2024-11-05 03:30:17.076512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.665 [2024-11-05 03:30:17.076532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.665 [2024-11-05 03:30:17.152267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.665 [2024-11-05 03:30:17.152592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.665 [2024-11-05 03:30:17.152631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:03.665 [2024-11-05 03:30:17.152660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.665 [2024-11-05 03:30:17.155971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.665 [2024-11-05 03:30:17.156202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.665 [2024-11-05 03:30:17.156362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.665 [2024-11-05 03:30:17.156470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.665 [2024-11-05 03:30:17.156666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:03.665 [2024-11-05 03:30:17.156700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:03.665 pt2 00:19:03.665 [2024-11-05 03:30:17.157143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:03.665 [2024-11-05 03:30:17.157409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:03.665 [2024-11-05 03:30:17.157434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.665 [2024-11-05 03:30:17.157689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.665 03:30:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.665 "name": "raid_bdev1", 00:19:03.665 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:03.665 "strip_size_kb": 0, 00:19:03.665 "state": "online", 00:19:03.665 "raid_level": "raid1", 00:19:03.665 "superblock": true, 00:19:03.665 "num_base_bdevs": 2, 00:19:03.665 "num_base_bdevs_discovered": 1, 00:19:03.665 "num_base_bdevs_operational": 1, 00:19:03.665 "base_bdevs_list": [ 00:19:03.665 { 00:19:03.665 "name": null, 00:19:03.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.665 "is_configured": false, 00:19:03.665 "data_offset": 256, 00:19:03.665 "data_size": 7936 00:19:03.665 }, 00:19:03.665 { 00:19:03.665 "name": "pt2", 00:19:03.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.665 "is_configured": true, 00:19:03.665 "data_offset": 256, 00:19:03.665 "data_size": 7936 00:19:03.665 } 00:19:03.665 ] 00:19:03.665 }' 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.665 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.233 [2024-11-05 03:30:17.688674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.233 [2024-11-05 03:30:17.688718] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.233 [2024-11-05 03:30:17.688809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.233 [2024-11-05 03:30:17.688935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:04.233 [2024-11-05 03:30:17.688951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.233 [2024-11-05 03:30:17.752729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.233 [2024-11-05 03:30:17.752829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.233 [2024-11-05 03:30:17.752888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:04.233 [2024-11-05 03:30:17.752903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.233 [2024-11-05 03:30:17.755994] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.233 [2024-11-05 03:30:17.756039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.233 [2024-11-05 03:30:17.756144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:04.233 [2024-11-05 03:30:17.756220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.233 [2024-11-05 03:30:17.756478] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:04.233 [2024-11-05 03:30:17.756504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.233 [2024-11-05 03:30:17.756529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:04.233 [2024-11-05 03:30:17.756606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.233 [2024-11-05 03:30:17.756736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:04.233 [2024-11-05 03:30:17.756752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.233 [2024-11-05 03:30:17.757103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:04.233 [2024-11-05 03:30:17.757275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:04.233 [2024-11-05 03:30:17.757294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:04.233 pt1 00:19:04.233 [2024-11-05 03:30:17.757674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.233 "name": "raid_bdev1", 00:19:04.233 "uuid": "baac91cc-cd58-4e93-bd7e-e9dd574854b6", 00:19:04.233 "strip_size_kb": 0, 00:19:04.233 "state": "online", 00:19:04.233 
"raid_level": "raid1", 00:19:04.233 "superblock": true, 00:19:04.233 "num_base_bdevs": 2, 00:19:04.233 "num_base_bdevs_discovered": 1, 00:19:04.233 "num_base_bdevs_operational": 1, 00:19:04.233 "base_bdevs_list": [ 00:19:04.233 { 00:19:04.233 "name": null, 00:19:04.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.233 "is_configured": false, 00:19:04.233 "data_offset": 256, 00:19:04.233 "data_size": 7936 00:19:04.233 }, 00:19:04.233 { 00:19:04.233 "name": "pt2", 00:19:04.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.233 "is_configured": true, 00:19:04.233 "data_offset": 256, 00:19:04.233 "data_size": 7936 00:19:04.233 } 00:19:04.233 ] 00:19:04.233 }' 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.233 03:30:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:19:04.800 [2024-11-05 03:30:18.357535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' baac91cc-cd58-4e93-bd7e-e9dd574854b6 '!=' baac91cc-cd58-4e93-bd7e-e9dd574854b6 ']' 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86358 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86358 ']' 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86358 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.800 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86358 00:19:05.058 killing process with pid 86358 00:19:05.058 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:05.058 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:05.058 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86358' 00:19:05.058 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86358 00:19:05.058 [2024-11-05 03:30:18.445680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.058 03:30:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86358 00:19:05.058 [2024-11-05 03:30:18.445809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.058 [2024-11-05 03:30:18.445902] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.058 [2024-11-05 03:30:18.445926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:05.058 [2024-11-05 03:30:18.623325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.435 03:30:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:06.435 00:19:06.435 real 0m6.830s 00:19:06.435 user 0m10.806s 00:19:06.435 sys 0m1.031s 00:19:06.435 03:30:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:06.435 ************************************ 00:19:06.435 END TEST raid_superblock_test_4k 00:19:06.435 03:30:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.435 ************************************ 00:19:06.435 03:30:19 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:06.435 03:30:19 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:06.435 03:30:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:06.435 03:30:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:06.435 03:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.435 ************************************ 00:19:06.435 START TEST raid_rebuild_test_sb_4k 00:19:06.435 ************************************ 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:06.435 
03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:06.435 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86692 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86692 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86692 ']' 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.435 03:30:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.435 [2024-11-05 03:30:19.846214] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:19:06.435 [2024-11-05 03:30:19.846643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86692 ] 00:19:06.435 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:06.435 Zero copy mechanism will not be used. 00:19:06.435 [2024-11-05 03:30:20.022641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.694 [2024-11-05 03:30:20.157748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.953 [2024-11-05 03:30:20.364106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.953 [2024-11-05 03:30:20.364454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 BaseBdev1_malloc 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 [2024-11-05 03:30:20.916837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:07.522 [2024-11-05 03:30:20.916959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.522 [2024-11-05 03:30:20.917060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:07.522 [2024-11-05 03:30:20.917076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.522 [2024-11-05 03:30:20.919948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.522 [2024-11-05 03:30:20.920010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.522 BaseBdev1 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 BaseBdev2_malloc 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 [2024-11-05 03:30:20.974222] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:19:07.522 [2024-11-05 03:30:20.974305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.522 [2024-11-05 03:30:20.974387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:07.522 [2024-11-05 03:30:20.974409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.522 [2024-11-05 03:30:20.977258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.522 [2024-11-05 03:30:20.977342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:07.522 BaseBdev2 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 spare_malloc 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 spare_delay 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 [2024-11-05 03:30:21.048509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:07.522 [2024-11-05 03:30:21.048591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.522 [2024-11-05 03:30:21.048618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:07.522 [2024-11-05 03:30:21.048634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.522 [2024-11-05 03:30:21.051560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.522 [2024-11-05 03:30:21.051858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:07.522 spare 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 [2024-11-05 03:30:21.060736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.522 [2024-11-05 03:30:21.063371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.522 [2024-11-05 03:30:21.063817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:07.522 [2024-11-05 03:30:21.063866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:07.522 [2024-11-05 03:30:21.064199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005e10 00:19:07.522 [2024-11-05 03:30:21.064507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:07.522 [2024-11-05 03:30:21.064524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:07.522 [2024-11-05 03:30:21.064783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.522 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.523 "name": "raid_bdev1", 00:19:07.523 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:07.523 "strip_size_kb": 0, 00:19:07.523 "state": "online", 00:19:07.523 "raid_level": "raid1", 00:19:07.523 "superblock": true, 00:19:07.523 "num_base_bdevs": 2, 00:19:07.523 "num_base_bdevs_discovered": 2, 00:19:07.523 "num_base_bdevs_operational": 2, 00:19:07.523 "base_bdevs_list": [ 00:19:07.523 { 00:19:07.523 "name": "BaseBdev1", 00:19:07.523 "uuid": "c5f6aaf2-1f84-55e8-b8ef-df04efb3d1c4", 00:19:07.523 "is_configured": true, 00:19:07.523 "data_offset": 256, 00:19:07.523 "data_size": 7936 00:19:07.523 }, 00:19:07.523 { 00:19:07.523 "name": "BaseBdev2", 00:19:07.523 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:07.523 "is_configured": true, 00:19:07.523 "data_offset": 256, 00:19:07.523 "data_size": 7936 00:19:07.523 } 00:19:07.523 ] 00:19:07.523 }' 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.523 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.090 [2024-11-05 03:30:21.589359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.090 03:30:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:08.090 03:30:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.090 03:30:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:08.349 [2024-11-05 03:30:21.973232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.607 /dev/nbd0 00:19:08.607 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.607 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.607 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:08.607 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:08.607 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:08.607 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.608 1+0 records in 00:19:08.608 1+0 records out 00:19:08.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344413 s, 11.9 MB/s 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:08.608 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:09.545 7936+0 records in 00:19:09.545 7936+0 records out 00:19:09.545 32505856 bytes (33 MB, 31 MiB) copied, 0.928673 s, 35.0 MB/s 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.545 03:30:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.808 [2024-11-05 03:30:23.290364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.808 [2024-11-05 03:30:23.306628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.808 03:30:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.808 "name": "raid_bdev1", 00:19:09.808 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:09.808 "strip_size_kb": 0, 00:19:09.808 "state": "online", 00:19:09.808 "raid_level": "raid1", 00:19:09.808 "superblock": true, 00:19:09.808 "num_base_bdevs": 2, 00:19:09.808 "num_base_bdevs_discovered": 1, 00:19:09.808 "num_base_bdevs_operational": 1, 00:19:09.808 "base_bdevs_list": [ 00:19:09.808 { 00:19:09.808 "name": null, 00:19:09.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.808 "is_configured": false, 00:19:09.808 "data_offset": 0, 00:19:09.808 "data_size": 7936 
00:19:09.808 }, 00:19:09.808 { 00:19:09.808 "name": "BaseBdev2", 00:19:09.808 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:09.808 "is_configured": true, 00:19:09.808 "data_offset": 256, 00:19:09.808 "data_size": 7936 00:19:09.808 } 00:19:09.808 ] 00:19:09.808 }' 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.808 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.398 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.398 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.398 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.398 [2024-11-05 03:30:23.842976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.398 [2024-11-05 03:30:23.861114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:10.398 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.398 03:30:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:10.398 [2024-11-05 03:30:23.864094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.335 
03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.335 "name": "raid_bdev1", 00:19:11.335 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:11.335 "strip_size_kb": 0, 00:19:11.335 "state": "online", 00:19:11.335 "raid_level": "raid1", 00:19:11.335 "superblock": true, 00:19:11.335 "num_base_bdevs": 2, 00:19:11.335 "num_base_bdevs_discovered": 2, 00:19:11.335 "num_base_bdevs_operational": 2, 00:19:11.335 "process": { 00:19:11.335 "type": "rebuild", 00:19:11.335 "target": "spare", 00:19:11.335 "progress": { 00:19:11.335 "blocks": 2560, 00:19:11.335 "percent": 32 00:19:11.335 } 00:19:11.335 }, 00:19:11.335 "base_bdevs_list": [ 00:19:11.335 { 00:19:11.335 "name": "spare", 00:19:11.335 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:11.335 "is_configured": true, 00:19:11.335 "data_offset": 256, 00:19:11.335 "data_size": 7936 00:19:11.335 }, 00:19:11.335 { 00:19:11.335 "name": "BaseBdev2", 00:19:11.335 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:11.335 "is_configured": true, 00:19:11.335 "data_offset": 256, 00:19:11.335 "data_size": 7936 00:19:11.335 } 00:19:11.335 ] 00:19:11.335 }' 00:19:11.335 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.594 03:30:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.594 03:30:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.594 [2024-11-05 03:30:25.041631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.594 [2024-11-05 03:30:25.073396] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.594 [2024-11-05 03:30:25.073731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.594 [2024-11-05 03:30:25.073760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.594 [2024-11-05 03:30:25.073785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.594 
03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.594 "name": "raid_bdev1", 00:19:11.594 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:11.594 "strip_size_kb": 0, 00:19:11.594 "state": "online", 00:19:11.594 "raid_level": "raid1", 00:19:11.594 "superblock": true, 00:19:11.594 "num_base_bdevs": 2, 00:19:11.594 "num_base_bdevs_discovered": 1, 00:19:11.594 "num_base_bdevs_operational": 1, 00:19:11.594 "base_bdevs_list": [ 00:19:11.594 { 00:19:11.594 "name": null, 00:19:11.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.594 "is_configured": false, 00:19:11.594 "data_offset": 0, 00:19:11.594 "data_size": 7936 00:19:11.594 }, 00:19:11.594 { 00:19:11.594 "name": "BaseBdev2", 00:19:11.594 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:11.594 "is_configured": true, 00:19:11.594 "data_offset": 256, 00:19:11.594 "data_size": 7936 00:19:11.594 } 00:19:11.594 ] 00:19:11.594 }' 00:19:11.594 03:30:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.594 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.161 "name": "raid_bdev1", 00:19:12.161 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:12.161 "strip_size_kb": 0, 00:19:12.161 "state": "online", 00:19:12.161 "raid_level": "raid1", 00:19:12.161 "superblock": true, 00:19:12.161 "num_base_bdevs": 2, 00:19:12.161 "num_base_bdevs_discovered": 1, 00:19:12.161 "num_base_bdevs_operational": 1, 00:19:12.161 "base_bdevs_list": [ 00:19:12.161 { 00:19:12.161 "name": null, 00:19:12.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.161 "is_configured": false, 00:19:12.161 "data_offset": 0, 00:19:12.161 
"data_size": 7936 00:19:12.161 }, 00:19:12.161 { 00:19:12.161 "name": "BaseBdev2", 00:19:12.161 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:12.161 "is_configured": true, 00:19:12.161 "data_offset": 256, 00:19:12.161 "data_size": 7936 00:19:12.161 } 00:19:12.161 ] 00:19:12.161 }' 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.161 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.420 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.420 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.420 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.420 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.420 [2024-11-05 03:30:25.818081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.420 [2024-11-05 03:30:25.832900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:12.420 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.420 03:30:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:12.420 [2024-11-05 03:30:25.835538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.355 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.355 "name": "raid_bdev1", 00:19:13.355 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:13.355 "strip_size_kb": 0, 00:19:13.355 "state": "online", 00:19:13.355 "raid_level": "raid1", 00:19:13.355 "superblock": true, 00:19:13.355 "num_base_bdevs": 2, 00:19:13.355 "num_base_bdevs_discovered": 2, 00:19:13.355 "num_base_bdevs_operational": 2, 00:19:13.355 "process": { 00:19:13.355 "type": "rebuild", 00:19:13.355 "target": "spare", 00:19:13.355 "progress": { 00:19:13.355 "blocks": 2560, 00:19:13.355 "percent": 32 00:19:13.355 } 00:19:13.355 }, 00:19:13.355 "base_bdevs_list": [ 00:19:13.355 { 00:19:13.355 "name": "spare", 00:19:13.356 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:13.356 "is_configured": true, 00:19:13.356 "data_offset": 256, 00:19:13.356 "data_size": 7936 00:19:13.356 }, 00:19:13.356 { 00:19:13.356 "name": "BaseBdev2", 00:19:13.356 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:13.356 "is_configured": true, 00:19:13.356 "data_offset": 256, 00:19:13.356 "data_size": 7936 00:19:13.356 } 00:19:13.356 ] 
00:19:13.356 }' 00:19:13.356 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.356 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.356 03:30:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:13.614 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=729 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.614 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.615 "name": "raid_bdev1", 00:19:13.615 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:13.615 "strip_size_kb": 0, 00:19:13.615 "state": "online", 00:19:13.615 "raid_level": "raid1", 00:19:13.615 "superblock": true, 00:19:13.615 "num_base_bdevs": 2, 00:19:13.615 "num_base_bdevs_discovered": 2, 00:19:13.615 "num_base_bdevs_operational": 2, 00:19:13.615 "process": { 00:19:13.615 "type": "rebuild", 00:19:13.615 "target": "spare", 00:19:13.615 "progress": { 00:19:13.615 "blocks": 2816, 00:19:13.615 "percent": 35 00:19:13.615 } 00:19:13.615 }, 00:19:13.615 "base_bdevs_list": [ 00:19:13.615 { 00:19:13.615 "name": "spare", 00:19:13.615 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:13.615 "is_configured": true, 00:19:13.615 "data_offset": 256, 00:19:13.615 "data_size": 7936 00:19:13.615 }, 00:19:13.615 { 00:19:13.615 "name": "BaseBdev2", 00:19:13.615 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:13.615 "is_configured": true, 00:19:13.615 "data_offset": 256, 00:19:13.615 "data_size": 7936 00:19:13.615 } 00:19:13.615 ] 00:19:13.615 }' 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.615 03:30:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.550 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.809 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.809 "name": "raid_bdev1", 00:19:14.809 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:14.809 "strip_size_kb": 0, 00:19:14.809 "state": "online", 00:19:14.809 "raid_level": "raid1", 00:19:14.809 "superblock": true, 00:19:14.809 "num_base_bdevs": 2, 00:19:14.809 "num_base_bdevs_discovered": 2, 00:19:14.809 "num_base_bdevs_operational": 2, 00:19:14.809 "process": { 00:19:14.809 
"type": "rebuild", 00:19:14.809 "target": "spare", 00:19:14.809 "progress": { 00:19:14.809 "blocks": 5888, 00:19:14.809 "percent": 74 00:19:14.809 } 00:19:14.809 }, 00:19:14.809 "base_bdevs_list": [ 00:19:14.809 { 00:19:14.809 "name": "spare", 00:19:14.809 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:14.809 "is_configured": true, 00:19:14.809 "data_offset": 256, 00:19:14.809 "data_size": 7936 00:19:14.809 }, 00:19:14.809 { 00:19:14.809 "name": "BaseBdev2", 00:19:14.809 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:14.809 "is_configured": true, 00:19:14.809 "data_offset": 256, 00:19:14.809 "data_size": 7936 00:19:14.809 } 00:19:14.809 ] 00:19:14.809 }' 00:19:14.809 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.809 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.809 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.809 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.809 03:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.376 [2024-11-05 03:30:28.956916] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:15.376 [2024-11-05 03:30:28.957019] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:15.376 [2024-11-05 03:30:28.957163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.988 
03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.988 "name": "raid_bdev1", 00:19:15.988 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:15.988 "strip_size_kb": 0, 00:19:15.988 "state": "online", 00:19:15.988 "raid_level": "raid1", 00:19:15.988 "superblock": true, 00:19:15.988 "num_base_bdevs": 2, 00:19:15.988 "num_base_bdevs_discovered": 2, 00:19:15.988 "num_base_bdevs_operational": 2, 00:19:15.988 "base_bdevs_list": [ 00:19:15.988 { 00:19:15.988 "name": "spare", 00:19:15.988 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:15.988 "is_configured": true, 00:19:15.988 "data_offset": 256, 00:19:15.988 "data_size": 7936 00:19:15.988 }, 00:19:15.988 { 00:19:15.988 "name": "BaseBdev2", 00:19:15.988 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:15.988 "is_configured": true, 00:19:15.988 "data_offset": 256, 00:19:15.988 "data_size": 7936 00:19:15.988 } 00:19:15.988 ] 00:19:15.988 }' 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.988 03:30:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.988 "name": "raid_bdev1", 00:19:15.988 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:15.988 "strip_size_kb": 0, 00:19:15.988 "state": "online", 00:19:15.988 "raid_level": "raid1", 00:19:15.988 "superblock": true, 00:19:15.988 "num_base_bdevs": 2, 00:19:15.988 "num_base_bdevs_discovered": 2, 00:19:15.988 
"num_base_bdevs_operational": 2, 00:19:15.988 "base_bdevs_list": [ 00:19:15.988 { 00:19:15.988 "name": "spare", 00:19:15.988 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:15.988 "is_configured": true, 00:19:15.988 "data_offset": 256, 00:19:15.988 "data_size": 7936 00:19:15.988 }, 00:19:15.988 { 00:19:15.988 "name": "BaseBdev2", 00:19:15.988 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:15.988 "is_configured": true, 00:19:15.988 "data_offset": 256, 00:19:15.988 "data_size": 7936 00:19:15.988 } 00:19:15.988 ] 00:19:15.988 }' 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.988 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.248 "name": "raid_bdev1", 00:19:16.248 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:16.248 "strip_size_kb": 0, 00:19:16.248 "state": "online", 00:19:16.248 "raid_level": "raid1", 00:19:16.248 "superblock": true, 00:19:16.248 "num_base_bdevs": 2, 00:19:16.248 "num_base_bdevs_discovered": 2, 00:19:16.248 "num_base_bdevs_operational": 2, 00:19:16.248 "base_bdevs_list": [ 00:19:16.248 { 00:19:16.248 "name": "spare", 00:19:16.248 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:16.248 "is_configured": true, 00:19:16.248 "data_offset": 256, 00:19:16.248 "data_size": 7936 00:19:16.248 }, 00:19:16.248 { 00:19:16.248 "name": "BaseBdev2", 00:19:16.248 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:16.248 "is_configured": true, 00:19:16.248 "data_offset": 256, 00:19:16.248 "data_size": 7936 00:19:16.248 } 00:19:16.248 ] 00:19:16.248 }' 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.248 03:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.507 [2024-11-05 03:30:30.106111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.507 [2024-11-05 03:30:30.106165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.507 [2024-11-05 03:30:30.106423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.507 [2024-11-05 03:30:30.106553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.507 [2024-11-05 03:30:30.106574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.507 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.765 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:17.023 /dev/nbd0 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:17.023 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.024 1+0 records in 00:19:17.024 1+0 records out 00:19:17.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250785 s, 16.3 MB/s 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:17.024 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:17.282 /dev/nbd1 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.282 1+0 records in 00:19:17.282 1+0 records out 00:19:17.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368397 s, 11.1 MB/s 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:17.282 03:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 
00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.541 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.799 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 [2024-11-05 03:30:31.554859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.058 [2024-11-05 03:30:31.554937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.058 [2024-11-05 03:30:31.555000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:18.058 [2024-11-05 03:30:31.555019] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.058 [2024-11-05 03:30:31.558362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.058 [2024-11-05 03:30:31.558405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.058 [2024-11-05 03:30:31.558603] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.058 [2024-11-05 03:30:31.558719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.058 [2024-11-05 03:30:31.558951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.058 spare 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 [2024-11-05 03:30:31.659125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:18.058 [2024-11-05 03:30:31.659155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:18.058 [2024-11-05 03:30:31.659519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:18.058 [2024-11-05 03:30:31.659798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:18.058 [2024-11-05 03:30:31.659822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:18.058 [2024-11-05 03:30:31.660044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.317 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.317 "name": "raid_bdev1", 00:19:18.317 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:18.317 "strip_size_kb": 0, 00:19:18.317 "state": "online", 00:19:18.317 "raid_level": 
"raid1", 00:19:18.317 "superblock": true, 00:19:18.317 "num_base_bdevs": 2, 00:19:18.317 "num_base_bdevs_discovered": 2, 00:19:18.317 "num_base_bdevs_operational": 2, 00:19:18.317 "base_bdevs_list": [ 00:19:18.317 { 00:19:18.317 "name": "spare", 00:19:18.317 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:18.317 "is_configured": true, 00:19:18.317 "data_offset": 256, 00:19:18.317 "data_size": 7936 00:19:18.317 }, 00:19:18.317 { 00:19:18.317 "name": "BaseBdev2", 00:19:18.317 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:18.317 "is_configured": true, 00:19:18.317 "data_offset": 256, 00:19:18.317 "data_size": 7936 00:19:18.317 } 00:19:18.317 ] 00:19:18.317 }' 00:19:18.317 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.317 03:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.575 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.834 "name": "raid_bdev1", 00:19:18.834 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:18.834 "strip_size_kb": 0, 00:19:18.834 "state": "online", 00:19:18.834 "raid_level": "raid1", 00:19:18.834 "superblock": true, 00:19:18.834 "num_base_bdevs": 2, 00:19:18.834 "num_base_bdevs_discovered": 2, 00:19:18.834 "num_base_bdevs_operational": 2, 00:19:18.834 "base_bdevs_list": [ 00:19:18.834 { 00:19:18.834 "name": "spare", 00:19:18.834 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:18.834 "is_configured": true, 00:19:18.834 "data_offset": 256, 00:19:18.834 "data_size": 7936 00:19:18.834 }, 00:19:18.834 { 00:19:18.834 "name": "BaseBdev2", 00:19:18.834 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:18.834 "is_configured": true, 00:19:18.834 "data_offset": 256, 00:19:18.834 "data_size": 7936 00:19:18.834 } 00:19:18.834 ] 00:19:18.834 }' 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.834 [2024-11-05 03:30:32.411372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.834 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.093 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.093 "name": "raid_bdev1", 00:19:19.093 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:19.093 "strip_size_kb": 0, 00:19:19.093 "state": "online", 00:19:19.093 "raid_level": "raid1", 00:19:19.093 "superblock": true, 00:19:19.093 "num_base_bdevs": 2, 00:19:19.093 "num_base_bdevs_discovered": 1, 00:19:19.093 "num_base_bdevs_operational": 1, 00:19:19.093 "base_bdevs_list": [ 00:19:19.093 { 00:19:19.093 "name": null, 00:19:19.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.093 "is_configured": false, 00:19:19.093 "data_offset": 0, 00:19:19.093 "data_size": 7936 00:19:19.093 }, 00:19:19.093 { 00:19:19.093 "name": "BaseBdev2", 00:19:19.093 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:19.093 "is_configured": true, 00:19:19.093 "data_offset": 256, 00:19:19.093 "data_size": 7936 00:19:19.093 } 00:19:19.093 ] 00:19:19.093 }' 00:19:19.093 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.093 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.352 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.352 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.352 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.352 [2024-11-05 03:30:32.943581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.352 
[2024-11-05 03:30:32.943882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.352 [2024-11-05 03:30:32.943904] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:19.352 [2024-11-05 03:30:32.943970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.352 [2024-11-05 03:30:32.959855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:19.352 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.352 03:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:19.352 [2024-11-05 03:30:32.962805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.728 03:30:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.728 "name": "raid_bdev1", 00:19:20.728 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:20.728 "strip_size_kb": 0, 00:19:20.728 "state": "online", 00:19:20.728 "raid_level": "raid1", 00:19:20.728 "superblock": true, 00:19:20.728 "num_base_bdevs": 2, 00:19:20.728 "num_base_bdevs_discovered": 2, 00:19:20.728 "num_base_bdevs_operational": 2, 00:19:20.728 "process": { 00:19:20.728 "type": "rebuild", 00:19:20.728 "target": "spare", 00:19:20.728 "progress": { 00:19:20.728 "blocks": 2560, 00:19:20.728 "percent": 32 00:19:20.728 } 00:19:20.728 }, 00:19:20.728 "base_bdevs_list": [ 00:19:20.728 { 00:19:20.728 "name": "spare", 00:19:20.728 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:20.728 "is_configured": true, 00:19:20.728 "data_offset": 256, 00:19:20.728 "data_size": 7936 00:19:20.728 }, 00:19:20.728 { 00:19:20.728 "name": "BaseBdev2", 00:19:20.728 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:20.728 "is_configured": true, 00:19:20.728 "data_offset": 256, 00:19:20.728 "data_size": 7936 00:19:20.728 } 00:19:20.728 ] 00:19:20.728 }' 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.728 [2024-11-05 03:30:34.136220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.728 [2024-11-05 03:30:34.171184] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.728 [2024-11-05 03:30:34.171278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.728 [2024-11-05 03:30:34.171302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.728 [2024-11-05 03:30:34.171335] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.728 "name": "raid_bdev1", 00:19:20.728 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:20.728 "strip_size_kb": 0, 00:19:20.728 "state": "online", 00:19:20.728 "raid_level": "raid1", 00:19:20.728 "superblock": true, 00:19:20.728 "num_base_bdevs": 2, 00:19:20.728 "num_base_bdevs_discovered": 1, 00:19:20.728 "num_base_bdevs_operational": 1, 00:19:20.728 "base_bdevs_list": [ 00:19:20.728 { 00:19:20.728 "name": null, 00:19:20.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.728 "is_configured": false, 00:19:20.728 "data_offset": 0, 00:19:20.728 "data_size": 7936 00:19:20.728 }, 00:19:20.728 { 00:19:20.728 "name": "BaseBdev2", 00:19:20.728 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:20.728 "is_configured": true, 00:19:20.728 "data_offset": 256, 00:19:20.728 "data_size": 7936 00:19:20.728 } 00:19:20.728 ] 00:19:20.728 }' 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.728 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.295 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.295 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.295 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:19:21.295 [2024-11-05 03:30:34.731470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.295 [2024-11-05 03:30:34.731575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.295 [2024-11-05 03:30:34.731607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:21.295 [2024-11-05 03:30:34.731628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.295 [2024-11-05 03:30:34.732305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.295 [2024-11-05 03:30:34.732399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.295 [2024-11-05 03:30:34.732523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.295 [2024-11-05 03:30:34.732548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.295 [2024-11-05 03:30:34.732562] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:21.295 [2024-11-05 03:30:34.732618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.295 [2024-11-05 03:30:34.747767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:21.295 spare 00:19:21.295 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.295 03:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:21.295 [2024-11-05 03:30:34.750475] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.230 "name": "raid_bdev1", 00:19:22.230 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:22.230 "strip_size_kb": 0, 00:19:22.230 
"state": "online", 00:19:22.230 "raid_level": "raid1", 00:19:22.230 "superblock": true, 00:19:22.230 "num_base_bdevs": 2, 00:19:22.230 "num_base_bdevs_discovered": 2, 00:19:22.230 "num_base_bdevs_operational": 2, 00:19:22.230 "process": { 00:19:22.230 "type": "rebuild", 00:19:22.230 "target": "spare", 00:19:22.230 "progress": { 00:19:22.230 "blocks": 2560, 00:19:22.230 "percent": 32 00:19:22.230 } 00:19:22.230 }, 00:19:22.230 "base_bdevs_list": [ 00:19:22.230 { 00:19:22.230 "name": "spare", 00:19:22.230 "uuid": "312686ff-2707-5aae-a480-f275f5975127", 00:19:22.230 "is_configured": true, 00:19:22.230 "data_offset": 256, 00:19:22.230 "data_size": 7936 00:19:22.230 }, 00:19:22.230 { 00:19:22.230 "name": "BaseBdev2", 00:19:22.230 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:22.230 "is_configured": true, 00:19:22.230 "data_offset": 256, 00:19:22.230 "data_size": 7936 00:19:22.230 } 00:19:22.230 ] 00:19:22.230 }' 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.230 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.490 [2024-11-05 03:30:35.923945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.490 [2024-11-05 03:30:35.958852] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:22.490 [2024-11-05 03:30:35.958934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.490 [2024-11-05 03:30:35.958964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.490 [2024-11-05 03:30:35.958975] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.490 03:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.490 03:30:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.490 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.490 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.490 "name": "raid_bdev1", 00:19:22.490 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:22.490 "strip_size_kb": 0, 00:19:22.490 "state": "online", 00:19:22.490 "raid_level": "raid1", 00:19:22.490 "superblock": true, 00:19:22.490 "num_base_bdevs": 2, 00:19:22.490 "num_base_bdevs_discovered": 1, 00:19:22.490 "num_base_bdevs_operational": 1, 00:19:22.490 "base_bdevs_list": [ 00:19:22.490 { 00:19:22.490 "name": null, 00:19:22.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.490 "is_configured": false, 00:19:22.490 "data_offset": 0, 00:19:22.490 "data_size": 7936 00:19:22.490 }, 00:19:22.490 { 00:19:22.490 "name": "BaseBdev2", 00:19:22.490 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:22.490 "is_configured": true, 00:19:22.490 "data_offset": 256, 00:19:22.490 "data_size": 7936 00:19:22.490 } 00:19:22.490 ] 00:19:22.490 }' 00:19:22.490 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.490 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.057 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.057 "name": "raid_bdev1", 00:19:23.057 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:23.057 "strip_size_kb": 0, 00:19:23.057 "state": "online", 00:19:23.057 "raid_level": "raid1", 00:19:23.057 "superblock": true, 00:19:23.057 "num_base_bdevs": 2, 00:19:23.057 "num_base_bdevs_discovered": 1, 00:19:23.057 "num_base_bdevs_operational": 1, 00:19:23.057 "base_bdevs_list": [ 00:19:23.057 { 00:19:23.057 "name": null, 00:19:23.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.057 "is_configured": false, 00:19:23.057 "data_offset": 0, 00:19:23.057 "data_size": 7936 00:19:23.057 }, 00:19:23.057 { 00:19:23.057 "name": "BaseBdev2", 00:19:23.057 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:23.057 "is_configured": true, 00:19:23.058 "data_offset": 256, 00:19:23.058 "data_size": 7936 00:19:23.058 } 00:19:23.058 ] 00:19:23.058 }' 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.058 [2024-11-05 03:30:36.682861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:23.058 [2024-11-05 03:30:36.682959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.058 [2024-11-05 03:30:36.682999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:23.058 [2024-11-05 03:30:36.683027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.058 [2024-11-05 03:30:36.683612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.058 [2024-11-05 03:30:36.683642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.058 [2024-11-05 03:30:36.683755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:23.058 [2024-11-05 03:30:36.683795] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.058 [2024-11-05 03:30:36.683810] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.058 [2024-11-05 03:30:36.683823] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:23.058 BaseBdev1 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.058 03:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.437 "name": "raid_bdev1", 00:19:24.437 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:24.437 "strip_size_kb": 0, 00:19:24.437 "state": "online", 00:19:24.437 "raid_level": "raid1", 00:19:24.437 "superblock": true, 00:19:24.437 "num_base_bdevs": 2, 00:19:24.437 "num_base_bdevs_discovered": 1, 00:19:24.437 "num_base_bdevs_operational": 1, 00:19:24.437 "base_bdevs_list": [ 00:19:24.437 { 00:19:24.437 "name": null, 00:19:24.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.437 "is_configured": false, 00:19:24.437 "data_offset": 0, 00:19:24.437 "data_size": 7936 00:19:24.437 }, 00:19:24.437 { 00:19:24.437 "name": "BaseBdev2", 00:19:24.437 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:24.437 "is_configured": true, 00:19:24.437 "data_offset": 256, 00:19:24.437 "data_size": 7936 00:19:24.437 } 00:19:24.437 ] 00:19:24.437 }' 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.437 03:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.696 "name": "raid_bdev1", 00:19:24.696 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:24.696 "strip_size_kb": 0, 00:19:24.696 "state": "online", 00:19:24.696 "raid_level": "raid1", 00:19:24.696 "superblock": true, 00:19:24.696 "num_base_bdevs": 2, 00:19:24.696 "num_base_bdevs_discovered": 1, 00:19:24.696 "num_base_bdevs_operational": 1, 00:19:24.696 "base_bdevs_list": [ 00:19:24.696 { 00:19:24.696 "name": null, 00:19:24.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.696 "is_configured": false, 00:19:24.696 "data_offset": 0, 00:19:24.696 "data_size": 7936 00:19:24.696 }, 00:19:24.696 { 00:19:24.696 "name": "BaseBdev2", 00:19:24.696 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:24.696 "is_configured": true, 00:19:24.696 "data_offset": 256, 00:19:24.696 "data_size": 7936 00:19:24.696 } 00:19:24.696 ] 00:19:24.696 }' 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.696 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.955 [2024-11-05 03:30:38.383440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.955 [2024-11-05 03:30:38.383681] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.955 [2024-11-05 03:30:38.383748] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.955 request: 00:19:24.955 { 00:19:24.955 "base_bdev": "BaseBdev1", 00:19:24.955 "raid_bdev": "raid_bdev1", 00:19:24.955 "method": "bdev_raid_add_base_bdev", 00:19:24.955 "req_id": 1 00:19:24.955 } 00:19:24.955 Got JSON-RPC error response 00:19:24.955 response: 00:19:24.955 { 00:19:24.955 "code": -22, 00:19:24.955 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:24.955 } 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
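The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` step above is an expected-failure assertion: the RPC must fail (here with code -22, "Invalid argument", because the superblock sequence number on BaseBdev1 is stale) for the test step to pass. A minimal sketch of that exit-code inversion idiom, assuming none of the real `autotest_common.sh` machinery (`valid_exec_arg`, the `es > 128` signal check) beyond the core invert-the-result idea:

```shell
# Hedged sketch of the NOT helper idiom from autotest_common.sh:
# run a command that is *expected* to fail, and succeed only if it did.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert: a nonzero exit from the wrapped command means the check passes.
    ((es != 0))
}

NOT false && echo "expected failure detected"
```

In the log, the wrapped `rpc_cmd` exits nonzero, so `es=1` is recorded and the subsequent `(( !es == 0 ))` evaluation lets the script continue to the next verification step.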
00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.955 03:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.891 "name": "raid_bdev1", 00:19:25.891 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:25.891 "strip_size_kb": 0, 00:19:25.891 "state": "online", 00:19:25.891 "raid_level": "raid1", 00:19:25.891 "superblock": true, 00:19:25.891 "num_base_bdevs": 2, 00:19:25.891 "num_base_bdevs_discovered": 1, 00:19:25.891 "num_base_bdevs_operational": 1, 00:19:25.891 "base_bdevs_list": [ 00:19:25.891 { 00:19:25.891 "name": null, 00:19:25.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.891 "is_configured": false, 00:19:25.891 "data_offset": 0, 00:19:25.891 "data_size": 7936 00:19:25.891 }, 00:19:25.891 { 00:19:25.891 "name": "BaseBdev2", 00:19:25.891 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:25.891 "is_configured": true, 00:19:25.891 "data_offset": 256, 00:19:25.891 "data_size": 7936 00:19:25.891 } 00:19:25.891 ] 00:19:25.891 }' 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.891 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.458 03:30:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.458 "name": "raid_bdev1", 00:19:26.458 "uuid": "853695f6-935b-47f2-b877-2ab3e69c302b", 00:19:26.458 "strip_size_kb": 0, 00:19:26.458 "state": "online", 00:19:26.458 "raid_level": "raid1", 00:19:26.458 "superblock": true, 00:19:26.458 "num_base_bdevs": 2, 00:19:26.458 "num_base_bdevs_discovered": 1, 00:19:26.458 "num_base_bdevs_operational": 1, 00:19:26.458 "base_bdevs_list": [ 00:19:26.458 { 00:19:26.458 "name": null, 00:19:26.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.458 "is_configured": false, 00:19:26.458 "data_offset": 0, 00:19:26.458 "data_size": 7936 00:19:26.458 }, 00:19:26.458 { 00:19:26.458 "name": "BaseBdev2", 00:19:26.458 "uuid": "4ef7ac04-48f0-5fdd-989f-a3e54869d3fa", 00:19:26.458 "is_configured": true, 00:19:26.458 "data_offset": 256, 00:19:26.458 "data_size": 7936 00:19:26.458 } 00:19:26.458 ] 00:19:26.458 }' 00:19:26.458 03:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.458 03:30:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86692 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86692 ']' 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86692 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.458 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86692 00:19:26.716 killing process with pid 86692 00:19:26.717 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.717 00:19:26.717 Latency(us) 00:19:26.717 [2024-11-05T03:30:40.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.717 [2024-11-05T03:30:40.356Z] =================================================================================================================== 00:19:26.717 [2024-11-05T03:30:40.356Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.717 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:26.717 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:26.717 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86692' 00:19:26.717 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86692 00:19:26.717 [2024-11-05 03:30:40.117991] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.717 03:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86692 00:19:26.717 [2024-11-05 03:30:40.118155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.717 [2024-11-05 
03:30:40.118265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.717 [2024-11-05 03:30:40.118284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:26.976 [2024-11-05 03:30:40.363321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.912 03:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.912 00:19:27.912 real 0m21.586s 00:19:27.912 user 0m29.290s 00:19:27.912 sys 0m2.562s 00:19:27.912 03:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:27.912 ************************************ 00:19:27.912 END TEST raid_rebuild_test_sb_4k 00:19:27.912 ************************************ 00:19:27.912 03:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.912 03:30:41 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:27.912 03:30:41 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:27.912 03:30:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:27.912 03:30:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:27.912 03:30:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.912 ************************************ 00:19:27.912 START TEST raid_state_function_test_sb_md_separate 00:19:27.912 ************************************ 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:27.912 
03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:27.912 Process raid pid: 87396 
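The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdevN` trace above is the loop that materializes the base-bdev name list for the state-function test. A standalone sketch of that loop (the names and `num_base_bdevs=2` mirror the log; the array handling is simplified relative to `bdev_raid.sh`):

```shell
# Build "BaseBdev1 BaseBdev2" the way bdev_raid.sh@209-211 does:
# one generated name per iteration, collected into an array.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"   # -> BaseBdev1 BaseBdev2
```

The resulting list is what later gets spliced into `bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid`, as seen further down in the trace.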
00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87396 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87396' 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87396 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87396 ']' 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
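`waitforlisten 87396` above blocks until the spawned `bdev_svc` process is up and its RPC socket accepts connections. A reduced sketch of the polling half, assuming only `kill -0` liveness checks (the real helper also probes `/var/tmp/spdk.sock` and honors the `max_retries=100` local set above):

```shell
# Poll until a PID is alive, up to max_retries attempts; a stand-in for
# waitforlisten's liveness loop (the RPC-socket probe is omitted here).
waitforpid() {
    local pid=$1 max_retries=${2:-100}
    for ((i = 0; i < max_retries; i++)); do
        # kill -0 delivers no signal; it only reports whether pid exists
        kill -0 "$pid" 2>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

sleep 2 &
waitforpid "$!" && echo "process is up"
```

The mirror-image idiom appears at teardown in `killprocess` (visible near the end of this excerpt), which uses the same `kill -0` probe before sending the real signal.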
00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:27.912 03:30:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.912 [2024-11-05 03:30:41.489104] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:27.912 [2024-11-05 03:30:41.489581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.171 [2024-11-05 03:30:41.664032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.171 [2024-11-05 03:30:41.789248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.429 [2024-11-05 03:30:41.985170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.429 [2024-11-05 03:30:41.985531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.997 [2024-11-05 03:30:42.508952] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.997 [2024-11-05 03:30:42.509232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:28.997 [2024-11-05 03:30:42.509261] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.997 [2024-11-05 03:30:42.509281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.997 "name": "Existed_Raid", 00:19:28.997 "uuid": "f501876b-b00f-4bbf-b28a-1150e7a42e4a", 00:19:28.997 "strip_size_kb": 0, 00:19:28.997 "state": "configuring", 00:19:28.997 "raid_level": "raid1", 00:19:28.997 "superblock": true, 00:19:28.997 "num_base_bdevs": 2, 00:19:28.997 "num_base_bdevs_discovered": 0, 00:19:28.997 "num_base_bdevs_operational": 2, 00:19:28.997 "base_bdevs_list": [ 00:19:28.997 { 00:19:28.997 "name": "BaseBdev1", 00:19:28.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.997 "is_configured": false, 00:19:28.997 "data_offset": 0, 00:19:28.997 "data_size": 0 00:19:28.997 }, 00:19:28.997 { 00:19:28.997 "name": "BaseBdev2", 00:19:28.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.997 "is_configured": false, 00:19:28.997 "data_offset": 0, 00:19:28.997 "data_size": 0 00:19:28.997 } 00:19:28.997 ] 00:19:28.997 }' 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.997 03:30:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.564 
[2024-11-05 03:30:43.025069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.564 [2024-11-05 03:30:43.025109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.564 [2024-11-05 03:30:43.037039] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.564 [2024-11-05 03:30:43.037266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.564 [2024-11-05 03:30:43.037473] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.564 [2024-11-05 03:30:43.037632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.564 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.565 [2024-11-05 03:30:43.084601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.565 
BaseBdev1 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.565 [ 00:19:29.565 { 00:19:29.565 "name": "BaseBdev1", 00:19:29.565 "aliases": [ 00:19:29.565 "b1f3cba6-183b-40f3-bead-2c91c09fe986" 00:19:29.565 ], 00:19:29.565 "product_name": "Malloc disk", 
00:19:29.565 "block_size": 4096, 00:19:29.565 "num_blocks": 8192, 00:19:29.565 "uuid": "b1f3cba6-183b-40f3-bead-2c91c09fe986", 00:19:29.565 "md_size": 32, 00:19:29.565 "md_interleave": false, 00:19:29.565 "dif_type": 0, 00:19:29.565 "assigned_rate_limits": { 00:19:29.565 "rw_ios_per_sec": 0, 00:19:29.565 "rw_mbytes_per_sec": 0, 00:19:29.565 "r_mbytes_per_sec": 0, 00:19:29.565 "w_mbytes_per_sec": 0 00:19:29.565 }, 00:19:29.565 "claimed": true, 00:19:29.565 "claim_type": "exclusive_write", 00:19:29.565 "zoned": false, 00:19:29.565 "supported_io_types": { 00:19:29.565 "read": true, 00:19:29.565 "write": true, 00:19:29.565 "unmap": true, 00:19:29.565 "flush": true, 00:19:29.565 "reset": true, 00:19:29.565 "nvme_admin": false, 00:19:29.565 "nvme_io": false, 00:19:29.565 "nvme_io_md": false, 00:19:29.565 "write_zeroes": true, 00:19:29.565 "zcopy": true, 00:19:29.565 "get_zone_info": false, 00:19:29.565 "zone_management": false, 00:19:29.565 "zone_append": false, 00:19:29.565 "compare": false, 00:19:29.565 "compare_and_write": false, 00:19:29.565 "abort": true, 00:19:29.565 "seek_hole": false, 00:19:29.565 "seek_data": false, 00:19:29.565 "copy": true, 00:19:29.565 "nvme_iov_md": false 00:19:29.565 }, 00:19:29.565 "memory_domains": [ 00:19:29.565 { 00:19:29.565 "dma_device_id": "system", 00:19:29.565 "dma_device_type": 1 00:19:29.565 }, 00:19:29.565 { 00:19:29.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.565 "dma_device_type": 2 00:19:29.565 } 00:19:29.565 ], 00:19:29.565 "driver_specific": {} 00:19:29.565 } 00:19:29.565 ] 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:29.565 03:30:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.565 "name": "Existed_Raid", 00:19:29.565 "uuid": "34866249-9504-42f7-99b1-3be6b83f1374", 
00:19:29.565 "strip_size_kb": 0, 00:19:29.565 "state": "configuring", 00:19:29.565 "raid_level": "raid1", 00:19:29.565 "superblock": true, 00:19:29.565 "num_base_bdevs": 2, 00:19:29.565 "num_base_bdevs_discovered": 1, 00:19:29.565 "num_base_bdevs_operational": 2, 00:19:29.565 "base_bdevs_list": [ 00:19:29.565 { 00:19:29.565 "name": "BaseBdev1", 00:19:29.565 "uuid": "b1f3cba6-183b-40f3-bead-2c91c09fe986", 00:19:29.565 "is_configured": true, 00:19:29.565 "data_offset": 256, 00:19:29.565 "data_size": 7936 00:19:29.565 }, 00:19:29.565 { 00:19:29.565 "name": "BaseBdev2", 00:19:29.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.565 "is_configured": false, 00:19:29.565 "data_offset": 0, 00:19:29.565 "data_size": 0 00:19:29.565 } 00:19:29.565 ] 00:19:29.565 }' 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.565 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.132 [2024-11-05 03:30:43.660961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.132 [2024-11-05 03:30:43.661017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:30.132 03:30:43 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.132 [2024-11-05 03:30:43.668925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.132 [2024-11-05 03:30:43.671595] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.132 [2024-11-05 03:30:43.671905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.132 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.133 "name": "Existed_Raid", 00:19:30.133 "uuid": "16eeca62-53d6-448c-9775-e2dc6fdbc2a7", 00:19:30.133 "strip_size_kb": 0, 00:19:30.133 "state": "configuring", 00:19:30.133 "raid_level": "raid1", 00:19:30.133 "superblock": true, 00:19:30.133 "num_base_bdevs": 2, 00:19:30.133 "num_base_bdevs_discovered": 1, 00:19:30.133 "num_base_bdevs_operational": 2, 00:19:30.133 "base_bdevs_list": [ 00:19:30.133 { 00:19:30.133 "name": "BaseBdev1", 00:19:30.133 "uuid": "b1f3cba6-183b-40f3-bead-2c91c09fe986", 00:19:30.133 "is_configured": true, 00:19:30.133 "data_offset": 256, 00:19:30.133 "data_size": 7936 00:19:30.133 }, 00:19:30.133 { 00:19:30.133 "name": "BaseBdev2", 00:19:30.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.133 "is_configured": false, 00:19:30.133 "data_offset": 0, 00:19:30.133 "data_size": 0 00:19:30.133 } 00:19:30.133 ] 00:19:30.133 }' 00:19:30.133 03:30:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.133 03:30:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.700 [2024-11-05 03:30:44.231785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.700 [2024-11-05 03:30:44.232074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:30.700 [2024-11-05 03:30:44.232093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:30.700 [2024-11-05 03:30:44.232213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:30.700 [2024-11-05 03:30:44.232421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:30.700 [2024-11-05 03:30:44.232458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:30.700 BaseBdev2 00:19:30.700 [2024-11-05 03:30:44.232583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:30.700 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.701 [ 00:19:30.701 { 00:19:30.701 "name": "BaseBdev2", 00:19:30.701 "aliases": [ 00:19:30.701 "da6f6cf5-065b-44f9-8d5c-98798be3eded" 00:19:30.701 ], 00:19:30.701 "product_name": "Malloc disk", 00:19:30.701 "block_size": 4096, 00:19:30.701 "num_blocks": 8192, 00:19:30.701 "uuid": "da6f6cf5-065b-44f9-8d5c-98798be3eded", 00:19:30.701 "md_size": 32, 00:19:30.701 "md_interleave": false, 00:19:30.701 "dif_type": 0, 00:19:30.701 "assigned_rate_limits": { 00:19:30.701 "rw_ios_per_sec": 0, 00:19:30.701 "rw_mbytes_per_sec": 0, 00:19:30.701 "r_mbytes_per_sec": 0, 00:19:30.701 "w_mbytes_per_sec": 0 00:19:30.701 }, 00:19:30.701 "claimed": true, 00:19:30.701 "claim_type": 
"exclusive_write", 00:19:30.701 "zoned": false, 00:19:30.701 "supported_io_types": { 00:19:30.701 "read": true, 00:19:30.701 "write": true, 00:19:30.701 "unmap": true, 00:19:30.701 "flush": true, 00:19:30.701 "reset": true, 00:19:30.701 "nvme_admin": false, 00:19:30.701 "nvme_io": false, 00:19:30.701 "nvme_io_md": false, 00:19:30.701 "write_zeroes": true, 00:19:30.701 "zcopy": true, 00:19:30.701 "get_zone_info": false, 00:19:30.701 "zone_management": false, 00:19:30.701 "zone_append": false, 00:19:30.701 "compare": false, 00:19:30.701 "compare_and_write": false, 00:19:30.701 "abort": true, 00:19:30.701 "seek_hole": false, 00:19:30.701 "seek_data": false, 00:19:30.701 "copy": true, 00:19:30.701 "nvme_iov_md": false 00:19:30.701 }, 00:19:30.701 "memory_domains": [ 00:19:30.701 { 00:19:30.701 "dma_device_id": "system", 00:19:30.701 "dma_device_type": 1 00:19:30.701 }, 00:19:30.701 { 00:19:30.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.701 "dma_device_type": 2 00:19:30.701 } 00:19:30.701 ], 00:19:30.701 "driver_specific": {} 00:19:30.701 } 00:19:30.701 ] 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.701 
03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.701 "name": "Existed_Raid", 00:19:30.701 "uuid": "16eeca62-53d6-448c-9775-e2dc6fdbc2a7", 00:19:30.701 "strip_size_kb": 0, 00:19:30.701 "state": "online", 00:19:30.701 "raid_level": "raid1", 00:19:30.701 "superblock": true, 00:19:30.701 "num_base_bdevs": 2, 00:19:30.701 "num_base_bdevs_discovered": 2, 00:19:30.701 "num_base_bdevs_operational": 2, 00:19:30.701 
"base_bdevs_list": [ 00:19:30.701 { 00:19:30.701 "name": "BaseBdev1", 00:19:30.701 "uuid": "b1f3cba6-183b-40f3-bead-2c91c09fe986", 00:19:30.701 "is_configured": true, 00:19:30.701 "data_offset": 256, 00:19:30.701 "data_size": 7936 00:19:30.701 }, 00:19:30.701 { 00:19:30.701 "name": "BaseBdev2", 00:19:30.701 "uuid": "da6f6cf5-065b-44f9-8d5c-98798be3eded", 00:19:30.701 "is_configured": true, 00:19:30.701 "data_offset": 256, 00:19:30.701 "data_size": 7936 00:19:30.701 } 00:19:30.701 ] 00:19:30.701 }' 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.701 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:19:31.268 [2024-11-05 03:30:44.816516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.268 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:31.268 "name": "Existed_Raid", 00:19:31.268 "aliases": [ 00:19:31.268 "16eeca62-53d6-448c-9775-e2dc6fdbc2a7" 00:19:31.268 ], 00:19:31.268 "product_name": "Raid Volume", 00:19:31.268 "block_size": 4096, 00:19:31.268 "num_blocks": 7936, 00:19:31.268 "uuid": "16eeca62-53d6-448c-9775-e2dc6fdbc2a7", 00:19:31.268 "md_size": 32, 00:19:31.268 "md_interleave": false, 00:19:31.268 "dif_type": 0, 00:19:31.268 "assigned_rate_limits": { 00:19:31.268 "rw_ios_per_sec": 0, 00:19:31.268 "rw_mbytes_per_sec": 0, 00:19:31.268 "r_mbytes_per_sec": 0, 00:19:31.268 "w_mbytes_per_sec": 0 00:19:31.268 }, 00:19:31.268 "claimed": false, 00:19:31.268 "zoned": false, 00:19:31.268 "supported_io_types": { 00:19:31.268 "read": true, 00:19:31.268 "write": true, 00:19:31.268 "unmap": false, 00:19:31.268 "flush": false, 00:19:31.268 "reset": true, 00:19:31.268 "nvme_admin": false, 00:19:31.268 "nvme_io": false, 00:19:31.268 "nvme_io_md": false, 00:19:31.268 "write_zeroes": true, 00:19:31.268 "zcopy": false, 00:19:31.268 "get_zone_info": false, 00:19:31.268 "zone_management": false, 00:19:31.268 "zone_append": false, 00:19:31.268 "compare": false, 00:19:31.268 "compare_and_write": false, 00:19:31.268 "abort": false, 00:19:31.268 "seek_hole": false, 00:19:31.268 "seek_data": false, 00:19:31.268 "copy": false, 00:19:31.268 "nvme_iov_md": false 00:19:31.269 }, 00:19:31.269 "memory_domains": [ 00:19:31.269 { 00:19:31.269 "dma_device_id": "system", 00:19:31.269 "dma_device_type": 1 00:19:31.269 }, 00:19:31.269 { 00:19:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.269 "dma_device_type": 2 00:19:31.269 }, 00:19:31.269 { 
00:19:31.269 "dma_device_id": "system", 00:19:31.269 "dma_device_type": 1 00:19:31.269 }, 00:19:31.269 { 00:19:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.269 "dma_device_type": 2 00:19:31.269 } 00:19:31.269 ], 00:19:31.269 "driver_specific": { 00:19:31.269 "raid": { 00:19:31.269 "uuid": "16eeca62-53d6-448c-9775-e2dc6fdbc2a7", 00:19:31.269 "strip_size_kb": 0, 00:19:31.269 "state": "online", 00:19:31.269 "raid_level": "raid1", 00:19:31.269 "superblock": true, 00:19:31.269 "num_base_bdevs": 2, 00:19:31.269 "num_base_bdevs_discovered": 2, 00:19:31.269 "num_base_bdevs_operational": 2, 00:19:31.269 "base_bdevs_list": [ 00:19:31.269 { 00:19:31.269 "name": "BaseBdev1", 00:19:31.269 "uuid": "b1f3cba6-183b-40f3-bead-2c91c09fe986", 00:19:31.269 "is_configured": true, 00:19:31.269 "data_offset": 256, 00:19:31.269 "data_size": 7936 00:19:31.269 }, 00:19:31.269 { 00:19:31.269 "name": "BaseBdev2", 00:19:31.269 "uuid": "da6f6cf5-065b-44f9-8d5c-98798be3eded", 00:19:31.269 "is_configured": true, 00:19:31.269 "data_offset": 256, 00:19:31.269 "data_size": 7936 00:19:31.269 } 00:19:31.269 ] 00:19:31.269 } 00:19:31.269 } 00:19:31.269 }' 00:19:31.269 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:31.528 BaseBdev2' 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:31.528 03:30:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.528 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.528 [2024-11-05 03:30:45.084177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.787 "name": "Existed_Raid", 00:19:31.787 "uuid": "16eeca62-53d6-448c-9775-e2dc6fdbc2a7", 00:19:31.787 "strip_size_kb": 0, 00:19:31.787 "state": "online", 00:19:31.787 "raid_level": "raid1", 00:19:31.787 "superblock": true, 00:19:31.787 "num_base_bdevs": 2, 00:19:31.787 "num_base_bdevs_discovered": 1, 00:19:31.787 "num_base_bdevs_operational": 1, 00:19:31.787 "base_bdevs_list": [ 00:19:31.787 { 00:19:31.787 "name": null, 00:19:31.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.787 "is_configured": false, 00:19:31.787 "data_offset": 0, 00:19:31.787 "data_size": 7936 00:19:31.787 }, 00:19:31.787 { 00:19:31.787 "name": "BaseBdev2", 00:19:31.787 "uuid": 
"da6f6cf5-065b-44f9-8d5c-98798be3eded", 00:19:31.787 "is_configured": true, 00:19:31.787 "data_offset": 256, 00:19:31.787 "data_size": 7936 00:19:31.787 } 00:19:31.787 ] 00:19:31.787 }' 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.787 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.354 [2024-11-05 03:30:45.763300] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:32.354 [2024-11-05 03:30:45.763486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.354 [2024-11-05 03:30:45.846571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.354 [2024-11-05 03:30:45.846633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.354 [2024-11-05 03:30:45.846652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:32.354 03:30:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87396 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87396 ']' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87396 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87396 00:19:32.354 killing process with pid 87396 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87396' 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87396 00:19:32.354 [2024-11-05 03:30:45.937564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.354 03:30:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87396 00:19:32.354 [2024-11-05 03:30:45.952498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.730 ************************************ 00:19:33.730 END TEST raid_state_function_test_sb_md_separate 00:19:33.730 ************************************ 00:19:33.730 03:30:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:33.730 00:19:33.730 real 0m5.535s 00:19:33.730 user 0m8.428s 
00:19:33.730 sys 0m0.803s 00:19:33.730 03:30:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.730 03:30:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.730 03:30:46 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:33.730 03:30:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:33.730 03:30:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.730 03:30:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.730 ************************************ 00:19:33.730 START TEST raid_superblock_test_md_separate 00:19:33.730 ************************************ 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87656 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87656 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87656 ']' 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.730 03:30:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.730 [2024-11-05 03:30:47.095088] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:33.730 [2024-11-05 03:30:47.095663] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87656 ] 00:19:33.730 [2024-11-05 03:30:47.281575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.989 [2024-11-05 03:30:47.404073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.989 [2024-11-05 03:30:47.605913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.989 [2024-11-05 03:30:47.606235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:34.556 03:30:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.556 malloc1 00:19:34.556 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.557 [2024-11-05 03:30:48.136010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.557 [2024-11-05 03:30:48.136089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.557 [2024-11-05 03:30:48.136121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:34.557 [2024-11-05 03:30:48.136136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.557 [2024-11-05 03:30:48.138866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.557 [2024-11-05 03:30:48.138908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:34.557 pt1 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.557 malloc2 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.557 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.557 03:30:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.557 [2024-11-05 03:30:48.187484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.557 [2024-11-05 03:30:48.187546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.557 [2024-11-05 03:30:48.187575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:34.557 [2024-11-05 03:30:48.187590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.557 [2024-11-05 03:30:48.190152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.557 [2024-11-05 03:30:48.190409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.557 pt2 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.816 [2024-11-05 03:30:48.199535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.816 [2024-11-05 03:30:48.202204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:34.816 [2024-11-05 03:30:48.202533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:34.816 [2024-11-05 03:30:48.202554] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:34.816 [2024-11-05 03:30:48.202664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:34.816 [2024-11-05 03:30:48.202848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:34.816 [2024-11-05 03:30:48.202867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:34.816 [2024-11-05 03:30:48.203002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.816 03:30:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.816 "name": "raid_bdev1", 00:19:34.816 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:34.816 "strip_size_kb": 0, 00:19:34.816 "state": "online", 00:19:34.816 "raid_level": "raid1", 00:19:34.816 "superblock": true, 00:19:34.816 "num_base_bdevs": 2, 00:19:34.816 "num_base_bdevs_discovered": 2, 00:19:34.816 "num_base_bdevs_operational": 2, 00:19:34.816 "base_bdevs_list": [ 00:19:34.816 { 00:19:34.816 "name": "pt1", 00:19:34.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.816 "is_configured": true, 00:19:34.816 "data_offset": 256, 00:19:34.816 "data_size": 7936 00:19:34.816 }, 00:19:34.816 { 00:19:34.816 "name": "pt2", 00:19:34.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.816 "is_configured": true, 00:19:34.816 "data_offset": 256, 00:19:34.816 "data_size": 7936 00:19:34.816 } 00:19:34.816 ] 00:19:34.816 }' 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.816 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.384 [2024-11-05 03:30:48.748101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.384 "name": "raid_bdev1", 00:19:35.384 "aliases": [ 00:19:35.384 "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a" 00:19:35.384 ], 00:19:35.384 "product_name": "Raid Volume", 00:19:35.384 "block_size": 4096, 00:19:35.384 "num_blocks": 7936, 00:19:35.384 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:35.384 "md_size": 32, 00:19:35.384 "md_interleave": false, 00:19:35.384 "dif_type": 0, 00:19:35.384 "assigned_rate_limits": { 00:19:35.384 "rw_ios_per_sec": 0, 00:19:35.384 "rw_mbytes_per_sec": 0, 00:19:35.384 "r_mbytes_per_sec": 0, 00:19:35.384 "w_mbytes_per_sec": 0 00:19:35.384 }, 00:19:35.384 "claimed": false, 00:19:35.384 "zoned": false, 
00:19:35.384 "supported_io_types": { 00:19:35.384 "read": true, 00:19:35.384 "write": true, 00:19:35.384 "unmap": false, 00:19:35.384 "flush": false, 00:19:35.384 "reset": true, 00:19:35.384 "nvme_admin": false, 00:19:35.384 "nvme_io": false, 00:19:35.384 "nvme_io_md": false, 00:19:35.384 "write_zeroes": true, 00:19:35.384 "zcopy": false, 00:19:35.384 "get_zone_info": false, 00:19:35.384 "zone_management": false, 00:19:35.384 "zone_append": false, 00:19:35.384 "compare": false, 00:19:35.384 "compare_and_write": false, 00:19:35.384 "abort": false, 00:19:35.384 "seek_hole": false, 00:19:35.384 "seek_data": false, 00:19:35.384 "copy": false, 00:19:35.384 "nvme_iov_md": false 00:19:35.384 }, 00:19:35.384 "memory_domains": [ 00:19:35.384 { 00:19:35.384 "dma_device_id": "system", 00:19:35.384 "dma_device_type": 1 00:19:35.384 }, 00:19:35.384 { 00:19:35.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.384 "dma_device_type": 2 00:19:35.384 }, 00:19:35.384 { 00:19:35.384 "dma_device_id": "system", 00:19:35.384 "dma_device_type": 1 00:19:35.384 }, 00:19:35.384 { 00:19:35.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.384 "dma_device_type": 2 00:19:35.384 } 00:19:35.384 ], 00:19:35.384 "driver_specific": { 00:19:35.384 "raid": { 00:19:35.384 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:35.384 "strip_size_kb": 0, 00:19:35.384 "state": "online", 00:19:35.384 "raid_level": "raid1", 00:19:35.384 "superblock": true, 00:19:35.384 "num_base_bdevs": 2, 00:19:35.384 "num_base_bdevs_discovered": 2, 00:19:35.384 "num_base_bdevs_operational": 2, 00:19:35.384 "base_bdevs_list": [ 00:19:35.384 { 00:19:35.384 "name": "pt1", 00:19:35.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.384 "is_configured": true, 00:19:35.384 "data_offset": 256, 00:19:35.384 "data_size": 7936 00:19:35.384 }, 00:19:35.384 { 00:19:35.384 "name": "pt2", 00:19:35.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.384 "is_configured": true, 00:19:35.384 "data_offset": 256, 
00:19:35.384 "data_size": 7936 00:19:35.384 } 00:19:35.384 ] 00:19:35.384 } 00:19:35.384 } 00:19:35.384 }' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:35.384 pt2' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 03:30:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.384 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.384 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.384 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.384 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.384 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:35.384 [2024-11-05 03:30:49.012097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6fcf6abc-77da-4ca0-a93b-2ae898c69d2a 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 6fcf6abc-77da-4ca0-a93b-2ae898c69d2a ']' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:35.644 03:30:49 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 [2024-11-05 03:30:49.067786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:35.644 [2024-11-05 03:30:49.067810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.644 [2024-11-05 03:30:49.067906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.644 [2024-11-05 03:30:49.067981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:35.644 [2024-11-05 03:30:49.068000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 [2024-11-05 03:30:49.207854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:35.644 [2024-11-05 03:30:49.210482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:35.644 [2024-11-05 03:30:49.210588] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:35.644 [2024-11-05 03:30:49.210665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:35.644 [2024-11-05 03:30:49.210692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:35.644 [2024-11-05 03:30:49.210708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:19:35.644 request: 00:19:35.644 { 00:19:35.644 "name": "raid_bdev1", 00:19:35.644 "raid_level": "raid1", 00:19:35.644 "base_bdevs": [ 00:19:35.644 "malloc1", 00:19:35.644 "malloc2" 00:19:35.644 ], 00:19:35.644 "superblock": false, 00:19:35.644 "method": "bdev_raid_create", 00:19:35.644 "req_id": 1 00:19:35.644 } 00:19:35.644 Got JSON-RPC error response 00:19:35.644 response: 00:19:35.644 { 00:19:35.644 "code": -17, 00:19:35.644 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:35.644 } 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.644 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.644 [2024-11-05 03:30:49.275859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.644 [2024-11-05 03:30:49.276128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.644 [2024-11-05 03:30:49.276162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:35.644 [2024-11-05 03:30:49.276180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.644 [2024-11-05 03:30:49.278955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.644 [2024-11-05 03:30:49.279018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.644 [2024-11-05 03:30:49.279072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:35.644 [2024-11-05 03:30:49.279137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:35.644 pt1 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.903 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.903 "name": "raid_bdev1", 00:19:35.903 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:35.903 "strip_size_kb": 0, 00:19:35.903 "state": "configuring", 00:19:35.903 "raid_level": "raid1", 00:19:35.903 "superblock": true, 00:19:35.903 "num_base_bdevs": 2, 00:19:35.903 "num_base_bdevs_discovered": 1, 00:19:35.903 "num_base_bdevs_operational": 2, 00:19:35.903 "base_bdevs_list": [ 00:19:35.903 { 00:19:35.903 "name": "pt1", 00:19:35.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.903 "is_configured": true, 00:19:35.903 "data_offset": 256, 00:19:35.903 "data_size": 7936 00:19:35.904 }, 00:19:35.904 { 
00:19:35.904 "name": null, 00:19:35.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.904 "is_configured": false, 00:19:35.904 "data_offset": 256, 00:19:35.904 "data_size": 7936 00:19:35.904 } 00:19:35.904 ] 00:19:35.904 }' 00:19:35.904 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.904 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.470 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:36.470 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:36.470 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:36.470 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.470 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.471 [2024-11-05 03:30:49.816019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.471 [2024-11-05 03:30:49.816110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.471 [2024-11-05 03:30:49.816138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:36.471 [2024-11-05 03:30:49.816154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.471 [2024-11-05 03:30:49.816513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.471 [2024-11-05 03:30:49.816544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.471 [2024-11-05 03:30:49.816608] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:36.471 [2024-11-05 03:30:49.816640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.471 [2024-11-05 03:30:49.816794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:36.471 [2024-11-05 03:30:49.816815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.471 [2024-11-05 03:30:49.816939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:36.471 [2024-11-05 03:30:49.817100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:36.471 [2024-11-05 03:30:49.817114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:36.471 [2024-11-05 03:30:49.817235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.471 pt2 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.471 03:30:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.471 "name": "raid_bdev1", 00:19:36.471 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:36.471 "strip_size_kb": 0, 00:19:36.471 "state": "online", 00:19:36.471 "raid_level": "raid1", 00:19:36.471 "superblock": true, 00:19:36.471 "num_base_bdevs": 2, 00:19:36.471 "num_base_bdevs_discovered": 2, 00:19:36.471 "num_base_bdevs_operational": 2, 00:19:36.471 "base_bdevs_list": [ 00:19:36.471 { 00:19:36.471 "name": "pt1", 00:19:36.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.471 "is_configured": true, 00:19:36.471 "data_offset": 256, 00:19:36.471 "data_size": 7936 00:19:36.471 }, 00:19:36.471 { 00:19:36.471 "name": "pt2", 00:19:36.471 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:36.471 "is_configured": true, 00:19:36.471 "data_offset": 256, 00:19:36.471 "data_size": 7936 00:19:36.471 } 00:19:36.471 ] 00:19:36.471 }' 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.471 03:30:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.730 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.730 [2024-11-05 03:30:50.360595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.989 "name": "raid_bdev1", 00:19:36.989 
"aliases": [ 00:19:36.989 "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a" 00:19:36.989 ], 00:19:36.989 "product_name": "Raid Volume", 00:19:36.989 "block_size": 4096, 00:19:36.989 "num_blocks": 7936, 00:19:36.989 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:36.989 "md_size": 32, 00:19:36.989 "md_interleave": false, 00:19:36.989 "dif_type": 0, 00:19:36.989 "assigned_rate_limits": { 00:19:36.989 "rw_ios_per_sec": 0, 00:19:36.989 "rw_mbytes_per_sec": 0, 00:19:36.989 "r_mbytes_per_sec": 0, 00:19:36.989 "w_mbytes_per_sec": 0 00:19:36.989 }, 00:19:36.989 "claimed": false, 00:19:36.989 "zoned": false, 00:19:36.989 "supported_io_types": { 00:19:36.989 "read": true, 00:19:36.989 "write": true, 00:19:36.989 "unmap": false, 00:19:36.989 "flush": false, 00:19:36.989 "reset": true, 00:19:36.989 "nvme_admin": false, 00:19:36.989 "nvme_io": false, 00:19:36.989 "nvme_io_md": false, 00:19:36.989 "write_zeroes": true, 00:19:36.989 "zcopy": false, 00:19:36.989 "get_zone_info": false, 00:19:36.989 "zone_management": false, 00:19:36.989 "zone_append": false, 00:19:36.989 "compare": false, 00:19:36.989 "compare_and_write": false, 00:19:36.989 "abort": false, 00:19:36.989 "seek_hole": false, 00:19:36.989 "seek_data": false, 00:19:36.989 "copy": false, 00:19:36.989 "nvme_iov_md": false 00:19:36.989 }, 00:19:36.989 "memory_domains": [ 00:19:36.989 { 00:19:36.989 "dma_device_id": "system", 00:19:36.989 "dma_device_type": 1 00:19:36.989 }, 00:19:36.989 { 00:19:36.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.989 "dma_device_type": 2 00:19:36.989 }, 00:19:36.989 { 00:19:36.989 "dma_device_id": "system", 00:19:36.989 "dma_device_type": 1 00:19:36.989 }, 00:19:36.989 { 00:19:36.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.989 "dma_device_type": 2 00:19:36.989 } 00:19:36.989 ], 00:19:36.989 "driver_specific": { 00:19:36.989 "raid": { 00:19:36.989 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:36.989 "strip_size_kb": 0, 00:19:36.989 "state": "online", 00:19:36.989 
"raid_level": "raid1", 00:19:36.989 "superblock": true, 00:19:36.989 "num_base_bdevs": 2, 00:19:36.989 "num_base_bdevs_discovered": 2, 00:19:36.989 "num_base_bdevs_operational": 2, 00:19:36.989 "base_bdevs_list": [ 00:19:36.989 { 00:19:36.989 "name": "pt1", 00:19:36.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.989 "is_configured": true, 00:19:36.989 "data_offset": 256, 00:19:36.989 "data_size": 7936 00:19:36.989 }, 00:19:36.989 { 00:19:36.989 "name": "pt2", 00:19:36.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.989 "is_configured": true, 00:19:36.989 "data_offset": 256, 00:19:36.989 "data_size": 7936 00:19:36.989 } 00:19:36.989 ] 00:19:36.989 } 00:19:36.989 } 00:19:36.989 }' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:36.989 pt2' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.989 03:30:50 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.989 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:37.248 [2024-11-05 03:30:50.636578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 6fcf6abc-77da-4ca0-a93b-2ae898c69d2a '!=' 6fcf6abc-77da-4ca0-a93b-2ae898c69d2a ']' 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.248 [2024-11-05 03:30:50.688264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.248 
03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.248 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.249 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.249 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.249 "name": "raid_bdev1", 00:19:37.249 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:37.249 "strip_size_kb": 0, 00:19:37.249 "state": "online", 00:19:37.249 "raid_level": "raid1", 00:19:37.249 "superblock": true, 00:19:37.249 "num_base_bdevs": 2, 00:19:37.249 "num_base_bdevs_discovered": 1, 00:19:37.249 "num_base_bdevs_operational": 1, 00:19:37.249 "base_bdevs_list": [ 00:19:37.249 { 00:19:37.249 "name": null, 00:19:37.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.249 "is_configured": false, 00:19:37.249 "data_offset": 0, 00:19:37.249 "data_size": 7936 00:19:37.249 }, 00:19:37.249 { 00:19:37.249 "name": "pt2", 00:19:37.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.249 "is_configured": true, 00:19:37.249 "data_offset": 256, 00:19:37.249 "data_size": 7936 00:19:37.249 } 
00:19:37.249 ] 00:19:37.249 }' 00:19:37.249 03:30:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.249 03:30:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 [2024-11-05 03:30:51.232456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.816 [2024-11-05 03:30:51.232486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.816 [2024-11-05 03:30:51.232569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.816 [2024-11-05 03:30:51.232629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.816 [2024-11-05 03:30:51.232647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.816 03:30:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.816 [2024-11-05 03:30:51.300434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.816 [2024-11-05 
03:30:51.300501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.816 [2024-11-05 03:30:51.300526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:37.816 [2024-11-05 03:30:51.300543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.816 [2024-11-05 03:30:51.303368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.816 [2024-11-05 03:30:51.303443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.816 [2024-11-05 03:30:51.303517] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:37.816 [2024-11-05 03:30:51.303572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.816 [2024-11-05 03:30:51.303685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:37.816 [2024-11-05 03:30:51.303713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.816 [2024-11-05 03:30:51.303809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:37.816 [2024-11-05 03:30:51.304027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:37.816 [2024-11-05 03:30:51.304050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:37.816 [2024-11-05 03:30:51.304170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.816 pt2 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.816 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.817 "name": "raid_bdev1", 00:19:37.817 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:37.817 "strip_size_kb": 0, 00:19:37.817 "state": "online", 00:19:37.817 "raid_level": "raid1", 00:19:37.817 "superblock": true, 00:19:37.817 "num_base_bdevs": 2, 00:19:37.817 
"num_base_bdevs_discovered": 1, 00:19:37.817 "num_base_bdevs_operational": 1, 00:19:37.817 "base_bdevs_list": [ 00:19:37.817 { 00:19:37.817 "name": null, 00:19:37.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.817 "is_configured": false, 00:19:37.817 "data_offset": 256, 00:19:37.817 "data_size": 7936 00:19:37.817 }, 00:19:37.817 { 00:19:37.817 "name": "pt2", 00:19:37.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.817 "is_configured": true, 00:19:37.817 "data_offset": 256, 00:19:37.817 "data_size": 7936 00:19:37.817 } 00:19:37.817 ] 00:19:37.817 }' 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.817 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.385 [2024-11-05 03:30:51.844597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.385 [2024-11-05 03:30:51.844847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.385 [2024-11-05 03:30:51.844962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.385 [2024-11-05 03:30:51.845035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.385 [2024-11-05 03:30:51.845051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.385 03:30:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.385 [2024-11-05 03:30:51.904685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.385 [2024-11-05 03:30:51.904791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.385 [2024-11-05 03:30:51.904828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:38.385 [2024-11-05 03:30:51.904857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.385 [2024-11-05 03:30:51.907635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.385 [2024-11-05 03:30:51.907755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:19:38.385 [2024-11-05 03:30:51.907851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:38.385 [2024-11-05 03:30:51.907908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.385 [2024-11-05 03:30:51.908068] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:38.385 [2024-11-05 03:30:51.908084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.385 [2024-11-05 03:30:51.908104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:38.385 [2024-11-05 03:30:51.908170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.385 [2024-11-05 03:30:51.908251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:38.385 [2024-11-05 03:30:51.908281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:38.385 [2024-11-05 03:30:51.908433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:38.385 [2024-11-05 03:30:51.908575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:38.385 [2024-11-05 03:30:51.908594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:38.385 [2024-11-05 03:30:51.908768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.385 pt1 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.385 "name": "raid_bdev1", 00:19:38.385 "uuid": "6fcf6abc-77da-4ca0-a93b-2ae898c69d2a", 00:19:38.385 "strip_size_kb": 0, 00:19:38.385 "state": "online", 00:19:38.385 "raid_level": "raid1", 
00:19:38.385 "superblock": true, 00:19:38.385 "num_base_bdevs": 2, 00:19:38.385 "num_base_bdevs_discovered": 1, 00:19:38.385 "num_base_bdevs_operational": 1, 00:19:38.385 "base_bdevs_list": [ 00:19:38.385 { 00:19:38.385 "name": null, 00:19:38.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.385 "is_configured": false, 00:19:38.385 "data_offset": 256, 00:19:38.385 "data_size": 7936 00:19:38.385 }, 00:19:38.385 { 00:19:38.385 "name": "pt2", 00:19:38.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.385 "is_configured": true, 00:19:38.385 "data_offset": 256, 00:19:38.385 "data_size": 7936 00:19:38.385 } 00:19:38.385 ] 00:19:38.385 }' 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.385 03:30:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.952 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:38.952 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:38.952 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.952 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.952 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.952 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.953 
03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.953 [2024-11-05 03:30:52.497290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 6fcf6abc-77da-4ca0-a93b-2ae898c69d2a '!=' 6fcf6abc-77da-4ca0-a93b-2ae898c69d2a ']' 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87656 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87656 ']' 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87656 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87656 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:38.953 killing process with pid 87656 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87656' 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87656 00:19:38.953 [2024-11-05 03:30:52.570927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.953 [2024-11-05 03:30:52.571021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:38.953 03:30:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87656 00:19:38.953 [2024-11-05 03:30:52.571080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.953 [2024-11-05 03:30:52.571103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:39.211 [2024-11-05 03:30:52.764843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.147 03:30:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:40.147 00:19:40.147 real 0m6.756s 00:19:40.147 user 0m10.773s 00:19:40.147 sys 0m0.982s 00:19:40.147 03:30:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:40.147 ************************************ 00:19:40.147 END TEST raid_superblock_test_md_separate 00:19:40.147 ************************************ 00:19:40.147 03:30:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 03:30:53 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:40.406 03:30:53 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:40.406 03:30:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:40.406 03:30:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:40.406 03:30:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 ************************************ 00:19:40.406 START TEST raid_rebuild_test_sb_md_separate 00:19:40.406 ************************************ 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87980 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87980 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87980 ']' 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.406 03:30:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 [2024-11-05 03:30:53.924216] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:40.406 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:40.406 Zero copy mechanism will not be used. 00:19:40.406 [2024-11-05 03:30:53.924801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87980 ] 00:19:40.665 [2024-11-05 03:30:54.114828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.665 [2024-11-05 03:30:54.240694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.923 [2024-11-05 03:30:54.430416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.923 [2024-11-05 03:30:54.430477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.489 03:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:41.489 03:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:41.489 03:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:41.489 03:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:41.489 03:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.489 03:30:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.489 BaseBdev1_malloc 
00:19:41.489 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.489 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:41.489 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.490 [2024-11-05 03:30:55.033708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:41.490 [2024-11-05 03:30:55.033803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.490 [2024-11-05 03:30:55.033836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:41.490 [2024-11-05 03:30:55.033854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.490 [2024-11-05 03:30:55.036623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.490 [2024-11-05 03:30:55.036670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:41.490 BaseBdev1 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.490 BaseBdev2_malloc 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.490 [2024-11-05 03:30:55.085285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:41.490 [2024-11-05 03:30:55.085429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.490 [2024-11-05 03:30:55.085463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:41.490 [2024-11-05 03:30:55.085484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.490 [2024-11-05 03:30:55.088150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.490 [2024-11-05 03:30:55.088215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:41.490 BaseBdev2 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.490 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.748 spare_malloc 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.748 spare_delay 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.748 [2024-11-05 03:30:55.154781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:41.748 [2024-11-05 03:30:55.154867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.748 [2024-11-05 03:30:55.154896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:41.748 [2024-11-05 03:30:55.154922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.748 [2024-11-05 03:30:55.157515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.748 [2024-11-05 03:30:55.157751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:41.748 spare 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.748 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.748 [2024-11-05 03:30:55.166827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.748 [2024-11-05 03:30:55.169391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.748 [2024-11-05 03:30:55.169612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:41.748 [2024-11-05 03:30:55.169635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:41.748 [2024-11-05 03:30:55.169733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:41.749 [2024-11-05 03:30:55.169933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:41.749 [2024-11-05 03:30:55.169949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:41.749 [2024-11-05 03:30:55.170069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.749 03:30:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.749 "name": "raid_bdev1", 00:19:41.749 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:41.749 "strip_size_kb": 0, 00:19:41.749 "state": "online", 00:19:41.749 "raid_level": "raid1", 00:19:41.749 "superblock": true, 00:19:41.749 "num_base_bdevs": 2, 00:19:41.749 "num_base_bdevs_discovered": 2, 00:19:41.749 "num_base_bdevs_operational": 2, 00:19:41.749 "base_bdevs_list": [ 00:19:41.749 { 00:19:41.749 "name": "BaseBdev1", 00:19:41.749 "uuid": "651a65b3-a140-5bdc-8628-19b57ede3726", 00:19:41.749 "is_configured": true, 00:19:41.749 "data_offset": 256, 00:19:41.749 "data_size": 7936 00:19:41.749 }, 00:19:41.749 { 00:19:41.749 "name": "BaseBdev2", 00:19:41.749 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:41.749 "is_configured": true, 00:19:41.749 "data_offset": 256, 00:19:41.749 "data_size": 7936 
00:19:41.749 } 00:19:41.749 ] 00:19:41.749 }' 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.749 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.316 [2024-11-05 03:30:55.667427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.316 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.317 03:30:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:42.576 [2024-11-05 03:30:55.995163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:42.576 /dev/nbd0 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.576 1+0 records in 00:19:42.576 1+0 records out 00:19:42.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295958 s, 13.8 MB/s 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.576 03:30:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:42.576 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:43.511 7936+0 records in 00:19:43.511 7936+0 records out 00:19:43.511 32505856 bytes (33 MB, 31 MiB) copied, 0.874892 s, 37.2 MB/s 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.511 03:30:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:43.772 [2024-11-05 03:30:57.230840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.772 03:30:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.772 [2024-11-05 03:30:57.250980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.772 "name": "raid_bdev1", 00:19:43.772 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:43.772 "strip_size_kb": 0, 00:19:43.772 "state": "online", 00:19:43.772 "raid_level": "raid1", 00:19:43.772 "superblock": true, 00:19:43.772 "num_base_bdevs": 2, 00:19:43.772 "num_base_bdevs_discovered": 1, 00:19:43.772 "num_base_bdevs_operational": 1, 00:19:43.772 "base_bdevs_list": [ 00:19:43.772 { 00:19:43.772 "name": null, 00:19:43.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.772 "is_configured": false, 00:19:43.772 "data_offset": 0, 00:19:43.772 "data_size": 7936 00:19:43.772 }, 00:19:43.772 { 00:19:43.772 "name": "BaseBdev2", 00:19:43.772 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:43.772 "is_configured": true, 00:19:43.772 "data_offset": 256, 00:19:43.772 "data_size": 7936 00:19:43.772 } 00:19:43.772 ] 00:19:43.772 }' 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.772 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.344 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.344 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.344 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.344 [2024-11-05 03:30:57.763102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.344 [2024-11-05 03:30:57.777712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:44.344 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.344 03:30:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:44.344 [2024-11-05 03:30:57.780481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.278 "name": "raid_bdev1", 00:19:45.278 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:45.278 "strip_size_kb": 0, 00:19:45.278 "state": "online", 00:19:45.278 "raid_level": "raid1", 00:19:45.278 "superblock": true, 00:19:45.278 "num_base_bdevs": 2, 00:19:45.278 "num_base_bdevs_discovered": 2, 00:19:45.278 "num_base_bdevs_operational": 2, 00:19:45.278 "process": { 00:19:45.278 "type": "rebuild", 00:19:45.278 "target": "spare", 00:19:45.278 "progress": { 00:19:45.278 "blocks": 2560, 00:19:45.278 "percent": 32 00:19:45.278 } 00:19:45.278 }, 00:19:45.278 "base_bdevs_list": [ 00:19:45.278 { 00:19:45.278 "name": "spare", 00:19:45.278 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:45.278 "is_configured": true, 00:19:45.278 "data_offset": 256, 00:19:45.278 "data_size": 7936 00:19:45.278 }, 00:19:45.278 { 00:19:45.278 "name": "BaseBdev2", 00:19:45.278 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:45.278 "is_configured": true, 00:19:45.278 "data_offset": 256, 00:19:45.278 "data_size": 7936 00:19:45.278 } 00:19:45.278 ] 00:19:45.278 }' 00:19:45.278 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.537 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.537 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.537 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.537 03:30:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:45.537 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.537 03:30:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.537 [2024-11-05 03:30:58.994492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.537 [2024-11-05 03:30:59.089951] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:45.537 [2024-11-05 03:30:59.090048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.537 [2024-11-05 03:30:59.090076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.537 [2024-11-05 03:30:59.090090] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.537 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.538 03:30:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.538 "name": "raid_bdev1", 00:19:45.538 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:45.538 "strip_size_kb": 0, 00:19:45.538 "state": "online", 00:19:45.538 "raid_level": "raid1", 00:19:45.538 "superblock": true, 00:19:45.538 "num_base_bdevs": 2, 00:19:45.538 "num_base_bdevs_discovered": 1, 00:19:45.538 "num_base_bdevs_operational": 1, 00:19:45.538 "base_bdevs_list": [ 00:19:45.538 { 00:19:45.538 "name": null, 00:19:45.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.538 "is_configured": false, 00:19:45.538 "data_offset": 0, 00:19:45.538 "data_size": 7936 00:19:45.538 }, 00:19:45.538 { 00:19:45.538 "name": "BaseBdev2", 00:19:45.538 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:45.538 "is_configured": true, 00:19:45.538 "data_offset": 256, 00:19:45.538 "data_size": 7936 00:19:45.538 } 00:19:45.538 ] 00:19:45.538 }' 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.538 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.105 "name": "raid_bdev1", 00:19:46.105 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:46.105 "strip_size_kb": 0, 00:19:46.105 "state": "online", 00:19:46.105 "raid_level": "raid1", 00:19:46.105 "superblock": true, 00:19:46.105 "num_base_bdevs": 2, 00:19:46.105 "num_base_bdevs_discovered": 1, 00:19:46.105 "num_base_bdevs_operational": 1, 00:19:46.105 "base_bdevs_list": [ 00:19:46.105 { 00:19:46.105 "name": null, 00:19:46.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.105 
"is_configured": false, 00:19:46.105 "data_offset": 0, 00:19:46.105 "data_size": 7936 00:19:46.105 }, 00:19:46.105 { 00:19:46.105 "name": "BaseBdev2", 00:19:46.105 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:46.105 "is_configured": true, 00:19:46.105 "data_offset": 256, 00:19:46.105 "data_size": 7936 00:19:46.105 } 00:19:46.105 ] 00:19:46.105 }' 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:46.105 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.363 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:46.363 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.363 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.363 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.363 [2024-11-05 03:30:59.792020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.363 [2024-11-05 03:30:59.804551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:46.363 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.363 03:30:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:46.363 [2024-11-05 03:30:59.807181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.298 03:31:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.298 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.298 "name": "raid_bdev1", 00:19:47.298 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:47.298 "strip_size_kb": 0, 00:19:47.298 "state": "online", 00:19:47.298 "raid_level": "raid1", 00:19:47.298 "superblock": true, 00:19:47.298 "num_base_bdevs": 2, 00:19:47.298 "num_base_bdevs_discovered": 2, 00:19:47.298 "num_base_bdevs_operational": 2, 00:19:47.298 "process": { 00:19:47.298 "type": "rebuild", 00:19:47.298 "target": "spare", 00:19:47.298 "progress": { 00:19:47.298 "blocks": 2560, 00:19:47.298 "percent": 32 00:19:47.298 } 00:19:47.298 }, 00:19:47.298 "base_bdevs_list": [ 00:19:47.298 { 00:19:47.298 "name": "spare", 00:19:47.298 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:47.298 "is_configured": true, 00:19:47.298 "data_offset": 256, 00:19:47.298 "data_size": 7936 00:19:47.299 }, 
00:19:47.299 { 00:19:47.299 "name": "BaseBdev2", 00:19:47.299 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:47.299 "is_configured": true, 00:19:47.299 "data_offset": 256, 00:19:47.299 "data_size": 7936 00:19:47.299 } 00:19:47.299 ] 00:19:47.299 }' 00:19:47.299 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.299 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.299 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:47.558 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=762 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.558 03:31:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.558 03:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.558 03:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.558 "name": "raid_bdev1", 00:19:47.558 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:47.558 "strip_size_kb": 0, 00:19:47.558 "state": "online", 00:19:47.558 "raid_level": "raid1", 00:19:47.558 "superblock": true, 00:19:47.558 "num_base_bdevs": 2, 00:19:47.558 "num_base_bdevs_discovered": 2, 00:19:47.558 "num_base_bdevs_operational": 2, 00:19:47.558 "process": { 00:19:47.558 "type": "rebuild", 00:19:47.558 "target": "spare", 00:19:47.558 "progress": { 00:19:47.558 "blocks": 2816, 00:19:47.558 "percent": 35 00:19:47.558 } 00:19:47.558 }, 00:19:47.558 "base_bdevs_list": [ 00:19:47.558 { 00:19:47.558 "name": "spare", 00:19:47.558 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:47.558 "is_configured": true, 00:19:47.558 "data_offset": 256, 00:19:47.558 "data_size": 7936 00:19:47.558 }, 00:19:47.558 { 00:19:47.558 "name": "BaseBdev2", 00:19:47.558 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:47.558 
"is_configured": true, 00:19:47.558 "data_offset": 256, 00:19:47.558 "data_size": 7936 00:19:47.558 } 00:19:47.558 ] 00:19:47.558 }' 00:19:47.558 03:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.558 03:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.558 03:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.558 03:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.558 03:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.934 03:31:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.934 "name": "raid_bdev1", 00:19:48.934 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:48.934 "strip_size_kb": 0, 00:19:48.934 "state": "online", 00:19:48.934 "raid_level": "raid1", 00:19:48.934 "superblock": true, 00:19:48.934 "num_base_bdevs": 2, 00:19:48.934 "num_base_bdevs_discovered": 2, 00:19:48.934 "num_base_bdevs_operational": 2, 00:19:48.934 "process": { 00:19:48.934 "type": "rebuild", 00:19:48.934 "target": "spare", 00:19:48.934 "progress": { 00:19:48.934 "blocks": 5888, 00:19:48.934 "percent": 74 00:19:48.934 } 00:19:48.934 }, 00:19:48.934 "base_bdevs_list": [ 00:19:48.934 { 00:19:48.934 "name": "spare", 00:19:48.934 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:48.934 "is_configured": true, 00:19:48.934 "data_offset": 256, 00:19:48.934 "data_size": 7936 00:19:48.934 }, 00:19:48.934 { 00:19:48.934 "name": "BaseBdev2", 00:19:48.934 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:48.934 "is_configured": true, 00:19:48.934 "data_offset": 256, 00:19:48.934 "data_size": 7936 00:19:48.934 } 00:19:48.934 ] 00:19:48.934 }' 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.934 03:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:49.502 [2024-11-05 03:31:02.928971] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:19:49.502 [2024-11-05 03:31:02.929050] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:49.502 [2024-11-05 03:31:02.929184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.762 "name": "raid_bdev1", 00:19:49.762 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:49.762 "strip_size_kb": 0, 00:19:49.762 "state": "online", 00:19:49.762 "raid_level": "raid1", 00:19:49.762 "superblock": true, 00:19:49.762 
"num_base_bdevs": 2, 00:19:49.762 "num_base_bdevs_discovered": 2, 00:19:49.762 "num_base_bdevs_operational": 2, 00:19:49.762 "base_bdevs_list": [ 00:19:49.762 { 00:19:49.762 "name": "spare", 00:19:49.762 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:49.762 "is_configured": true, 00:19:49.762 "data_offset": 256, 00:19:49.762 "data_size": 7936 00:19:49.762 }, 00:19:49.762 { 00:19:49.762 "name": "BaseBdev2", 00:19:49.762 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:49.762 "is_configured": true, 00:19:49.762 "data_offset": 256, 00:19:49.762 "data_size": 7936 00:19:49.762 } 00:19:49.762 ] 00:19:49.762 }' 00:19:49.762 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.021 03:31:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.021 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.021 "name": "raid_bdev1", 00:19:50.021 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:50.021 "strip_size_kb": 0, 00:19:50.021 "state": "online", 00:19:50.022 "raid_level": "raid1", 00:19:50.022 "superblock": true, 00:19:50.022 "num_base_bdevs": 2, 00:19:50.022 "num_base_bdevs_discovered": 2, 00:19:50.022 "num_base_bdevs_operational": 2, 00:19:50.022 "base_bdevs_list": [ 00:19:50.022 { 00:19:50.022 "name": "spare", 00:19:50.022 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:50.022 "is_configured": true, 00:19:50.022 "data_offset": 256, 00:19:50.022 "data_size": 7936 00:19:50.022 }, 00:19:50.022 { 00:19:50.022 "name": "BaseBdev2", 00:19:50.022 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:50.022 "is_configured": true, 00:19:50.022 "data_offset": 256, 00:19:50.022 "data_size": 7936 00:19:50.022 } 00:19:50.022 ] 00:19:50.022 }' 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.022 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.281 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.281 "name": "raid_bdev1", 00:19:50.281 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:50.281 
"strip_size_kb": 0, 00:19:50.281 "state": "online", 00:19:50.281 "raid_level": "raid1", 00:19:50.281 "superblock": true, 00:19:50.281 "num_base_bdevs": 2, 00:19:50.281 "num_base_bdevs_discovered": 2, 00:19:50.281 "num_base_bdevs_operational": 2, 00:19:50.281 "base_bdevs_list": [ 00:19:50.281 { 00:19:50.281 "name": "spare", 00:19:50.281 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:50.281 "is_configured": true, 00:19:50.281 "data_offset": 256, 00:19:50.281 "data_size": 7936 00:19:50.281 }, 00:19:50.281 { 00:19:50.281 "name": "BaseBdev2", 00:19:50.281 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:50.281 "is_configured": true, 00:19:50.281 "data_offset": 256, 00:19:50.281 "data_size": 7936 00:19:50.281 } 00:19:50.281 ] 00:19:50.281 }' 00:19:50.281 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.281 03:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.540 [2024-11-05 03:31:04.146525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.540 [2024-11-05 03:31:04.146738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.540 [2024-11-05 03:31:04.146874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.540 [2024-11-05 03:31:04.146965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.540 [2024-11-05 03:31:04.146981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.540 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.798 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:50.798 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.799 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:51.057 /dev/nbd0 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:51.057 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.058 1+0 records in 00:19:51.058 1+0 records out 00:19:51.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573794 s, 7.1 MB/s 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.058 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:51.329 /dev/nbd1 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.329 1+0 records in 00:19:51.329 1+0 records out 00:19:51.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048284 s, 8.5 MB/s 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.329 03:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.611 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.870 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.129 [2024-11-05 03:31:05.570938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.129 [2024-11-05 03:31:05.571015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.129 [2024-11-05 03:31:05.571047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:52.129 [2024-11-05 03:31:05.571061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:52.129 [2024-11-05 03:31:05.574109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.129 [2024-11-05 03:31:05.574279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.129 [2024-11-05 03:31:05.574489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:52.129 [2024-11-05 03:31:05.574676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.129 [2024-11-05 03:31:05.574865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.129 spare 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.129 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.129 [2024-11-05 03:31:05.675062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:52.129 [2024-11-05 03:31:05.675093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:52.129 [2024-11-05 03:31:05.675195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:52.129 [2024-11-05 03:31:05.675344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:52.130 [2024-11-05 03:31:05.675408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:52.130 [2024-11-05 03:31:05.675561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.130 "name": "raid_bdev1", 00:19:52.130 "uuid": 
"a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:52.130 "strip_size_kb": 0, 00:19:52.130 "state": "online", 00:19:52.130 "raid_level": "raid1", 00:19:52.130 "superblock": true, 00:19:52.130 "num_base_bdevs": 2, 00:19:52.130 "num_base_bdevs_discovered": 2, 00:19:52.130 "num_base_bdevs_operational": 2, 00:19:52.130 "base_bdevs_list": [ 00:19:52.130 { 00:19:52.130 "name": "spare", 00:19:52.130 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:52.130 "is_configured": true, 00:19:52.130 "data_offset": 256, 00:19:52.130 "data_size": 7936 00:19:52.130 }, 00:19:52.130 { 00:19:52.130 "name": "BaseBdev2", 00:19:52.130 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:52.130 "is_configured": true, 00:19:52.130 "data_offset": 256, 00:19:52.130 "data_size": 7936 00:19:52.130 } 00:19:52.130 ] 00:19:52.130 }' 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.130 03:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.698 "name": "raid_bdev1", 00:19:52.698 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:52.698 "strip_size_kb": 0, 00:19:52.698 "state": "online", 00:19:52.698 "raid_level": "raid1", 00:19:52.698 "superblock": true, 00:19:52.698 "num_base_bdevs": 2, 00:19:52.698 "num_base_bdevs_discovered": 2, 00:19:52.698 "num_base_bdevs_operational": 2, 00:19:52.698 "base_bdevs_list": [ 00:19:52.698 { 00:19:52.698 "name": "spare", 00:19:52.698 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:52.698 "is_configured": true, 00:19:52.698 "data_offset": 256, 00:19:52.698 "data_size": 7936 00:19:52.698 }, 00:19:52.698 { 00:19:52.698 "name": "BaseBdev2", 00:19:52.698 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:52.698 "is_configured": true, 00:19:52.698 "data_offset": 256, 00:19:52.698 "data_size": 7936 00:19:52.698 } 00:19:52.698 ] 00:19:52.698 }' 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.698 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.957 [2024-11-05 03:31:06.407199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.957 03:31:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.957 "name": "raid_bdev1", 00:19:52.957 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:52.957 "strip_size_kb": 0, 00:19:52.957 "state": "online", 00:19:52.957 "raid_level": "raid1", 00:19:52.957 "superblock": true, 00:19:52.957 "num_base_bdevs": 2, 00:19:52.957 "num_base_bdevs_discovered": 1, 00:19:52.957 "num_base_bdevs_operational": 1, 00:19:52.957 "base_bdevs_list": [ 00:19:52.957 { 00:19:52.957 "name": null, 00:19:52.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.957 "is_configured": false, 00:19:52.957 "data_offset": 0, 00:19:52.957 "data_size": 7936 00:19:52.957 }, 00:19:52.957 { 00:19:52.957 "name": "BaseBdev2", 00:19:52.957 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:52.957 "is_configured": true, 00:19:52.957 "data_offset": 256, 00:19:52.957 "data_size": 7936 00:19:52.957 } 00:19:52.957 ] 00:19:52.957 }' 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.957 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.525 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.525 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.525 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.525 [2024-11-05 03:31:06.935470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.525 [2024-11-05 03:31:06.935717] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:53.525 [2024-11-05 03:31:06.935745] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:53.525 [2024-11-05 03:31:06.935799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.525 [2024-11-05 03:31:06.949421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:53.525 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.525 03:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:53.525 [2024-11-05 03:31:06.952191] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.462 03:31:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.462 03:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.462 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.462 "name": "raid_bdev1", 00:19:54.462 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:54.462 "strip_size_kb": 0, 00:19:54.462 "state": "online", 00:19:54.462 "raid_level": "raid1", 00:19:54.462 "superblock": true, 00:19:54.462 "num_base_bdevs": 2, 00:19:54.462 "num_base_bdevs_discovered": 2, 00:19:54.462 "num_base_bdevs_operational": 2, 00:19:54.462 "process": { 00:19:54.462 "type": "rebuild", 00:19:54.462 "target": "spare", 00:19:54.462 "progress": { 00:19:54.462 "blocks": 2560, 00:19:54.462 "percent": 32 00:19:54.462 } 00:19:54.462 }, 00:19:54.462 "base_bdevs_list": [ 00:19:54.462 { 00:19:54.462 "name": "spare", 00:19:54.462 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:54.462 "is_configured": true, 00:19:54.462 "data_offset": 256, 00:19:54.462 "data_size": 7936 00:19:54.462 }, 00:19:54.462 { 00:19:54.462 "name": "BaseBdev2", 00:19:54.462 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:54.462 "is_configured": true, 00:19:54.462 "data_offset": 256, 00:19:54.462 "data_size": 7936 00:19:54.462 } 00:19:54.462 ] 00:19:54.462 
}' 00:19:54.462 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.462 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.462 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.722 [2024-11-05 03:31:08.129488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.722 [2024-11-05 03:31:08.160486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.722 [2024-11-05 03:31:08.160732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.722 [2024-11-05 03:31:08.160770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.722 [2024-11-05 03:31:08.160787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.722 "name": "raid_bdev1", 00:19:54.722 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:54.722 "strip_size_kb": 0, 00:19:54.722 "state": "online", 00:19:54.722 "raid_level": "raid1", 00:19:54.722 "superblock": true, 00:19:54.722 "num_base_bdevs": 2, 00:19:54.722 "num_base_bdevs_discovered": 1, 00:19:54.722 "num_base_bdevs_operational": 1, 00:19:54.722 "base_bdevs_list": [ 00:19:54.722 { 00:19:54.722 "name": 
null, 00:19:54.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.722 "is_configured": false, 00:19:54.722 "data_offset": 0, 00:19:54.722 "data_size": 7936 00:19:54.722 }, 00:19:54.722 { 00:19:54.722 "name": "BaseBdev2", 00:19:54.722 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:54.722 "is_configured": true, 00:19:54.722 "data_offset": 256, 00:19:54.722 "data_size": 7936 00:19:54.722 } 00:19:54.722 ] 00:19:54.722 }' 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.722 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.290 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:55.290 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.290 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.290 [2024-11-05 03:31:08.668006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.290 [2024-11-05 03:31:08.668091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.290 [2024-11-05 03:31:08.668123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:55.290 [2024-11-05 03:31:08.668140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.290 [2024-11-05 03:31:08.668493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.290 [2024-11-05 03:31:08.668525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.290 [2024-11-05 03:31:08.668613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:55.290 [2024-11-05 03:31:08.668635] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:55.290 [2024-11-05 03:31:08.668650] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:55.290 [2024-11-05 03:31:08.668680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.290 [2024-11-05 03:31:08.680883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:55.290 spare 00:19:55.290 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.290 03:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:55.290 [2024-11-05 03:31:08.683605] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.226 03:31:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.226 "name": "raid_bdev1", 00:19:56.226 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:56.226 "strip_size_kb": 0, 00:19:56.226 "state": "online", 00:19:56.226 "raid_level": "raid1", 00:19:56.226 "superblock": true, 00:19:56.226 "num_base_bdevs": 2, 00:19:56.226 "num_base_bdevs_discovered": 2, 00:19:56.226 "num_base_bdevs_operational": 2, 00:19:56.226 "process": { 00:19:56.226 "type": "rebuild", 00:19:56.226 "target": "spare", 00:19:56.226 "progress": { 00:19:56.226 "blocks": 2560, 00:19:56.226 "percent": 32 00:19:56.226 } 00:19:56.226 }, 00:19:56.226 "base_bdevs_list": [ 00:19:56.226 { 00:19:56.226 "name": "spare", 00:19:56.226 "uuid": "bec270f5-09c9-5f87-ad2f-84c9f4cf48dd", 00:19:56.226 "is_configured": true, 00:19:56.226 "data_offset": 256, 00:19:56.226 "data_size": 7936 00:19:56.226 }, 00:19:56.226 { 00:19:56.226 "name": "BaseBdev2", 00:19:56.226 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:56.226 "is_configured": true, 00:19:56.226 "data_offset": 256, 00:19:56.226 "data_size": 7936 00:19:56.226 } 00:19:56.226 ] 00:19:56.226 }' 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.226 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.226 [2024-11-05 03:31:09.850878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.484 [2024-11-05 03:31:09.892035] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:56.484 [2024-11-05 03:31:09.892137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.484 [2024-11-05 03:31:09.892173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.484 [2024-11-05 03:31:09.892184] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.484 "name": "raid_bdev1", 00:19:56.484 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:56.484 "strip_size_kb": 0, 00:19:56.484 "state": "online", 00:19:56.484 "raid_level": "raid1", 00:19:56.484 "superblock": true, 00:19:56.484 "num_base_bdevs": 2, 00:19:56.484 "num_base_bdevs_discovered": 1, 00:19:56.484 "num_base_bdevs_operational": 1, 00:19:56.484 "base_bdevs_list": [ 00:19:56.484 { 00:19:56.484 "name": null, 00:19:56.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.484 "is_configured": false, 00:19:56.484 "data_offset": 0, 00:19:56.484 "data_size": 7936 00:19:56.484 }, 00:19:56.484 { 00:19:56.484 "name": "BaseBdev2", 00:19:56.484 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:56.484 "is_configured": true, 00:19:56.484 "data_offset": 256, 00:19:56.484 "data_size": 7936 00:19:56.484 } 00:19:56.484 ] 00:19:56.484 }' 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.484 03:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.052 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.052 "name": "raid_bdev1", 00:19:57.052 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:57.052 "strip_size_kb": 0, 00:19:57.052 "state": "online", 00:19:57.052 "raid_level": "raid1", 00:19:57.052 "superblock": true, 00:19:57.052 "num_base_bdevs": 2, 00:19:57.052 "num_base_bdevs_discovered": 1, 00:19:57.052 "num_base_bdevs_operational": 1, 00:19:57.052 "base_bdevs_list": [ 00:19:57.052 { 00:19:57.052 "name": null, 00:19:57.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.052 "is_configured": false, 00:19:57.052 "data_offset": 0, 00:19:57.052 "data_size": 7936 00:19:57.052 }, 00:19:57.052 { 00:19:57.052 "name": "BaseBdev2", 00:19:57.052 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 
00:19:57.052 "is_configured": true, 00:19:57.052 "data_offset": 256, 00:19:57.052 "data_size": 7936 00:19:57.052 } 00:19:57.052 ] 00:19:57.052 }' 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.053 [2024-11-05 03:31:10.594378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:57.053 [2024-11-05 03:31:10.594438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.053 [2024-11-05 03:31:10.594473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:57.053 [2024-11-05 03:31:10.594488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:57.053 [2024-11-05 03:31:10.594791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.053 [2024-11-05 03:31:10.594819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.053 [2024-11-05 03:31:10.594892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:57.053 [2024-11-05 03:31:10.594911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.053 [2024-11-05 03:31:10.594928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:57.053 [2024-11-05 03:31:10.594940] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:57.053 BaseBdev1 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.053 03:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.990 03:31:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.990 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.286 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.286 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.286 "name": "raid_bdev1", 00:19:58.286 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:58.286 "strip_size_kb": 0, 00:19:58.286 "state": "online", 00:19:58.286 "raid_level": "raid1", 00:19:58.286 "superblock": true, 00:19:58.286 "num_base_bdevs": 2, 00:19:58.286 "num_base_bdevs_discovered": 1, 00:19:58.286 "num_base_bdevs_operational": 1, 00:19:58.286 "base_bdevs_list": [ 00:19:58.286 { 00:19:58.286 "name": null, 00:19:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.286 "is_configured": false, 00:19:58.286 "data_offset": 0, 00:19:58.286 "data_size": 7936 00:19:58.286 }, 00:19:58.286 { 00:19:58.286 "name": "BaseBdev2", 00:19:58.286 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:58.286 "is_configured": true, 00:19:58.286 "data_offset": 256, 00:19:58.286 "data_size": 7936 00:19:58.286 } 00:19:58.286 ] 00:19:58.286 }' 00:19:58.286 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.286 03:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.571 "name": "raid_bdev1", 00:19:58.571 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:58.571 "strip_size_kb": 0, 00:19:58.571 "state": "online", 00:19:58.571 "raid_level": "raid1", 00:19:58.571 "superblock": true, 00:19:58.571 "num_base_bdevs": 2, 00:19:58.571 "num_base_bdevs_discovered": 1, 00:19:58.571 "num_base_bdevs_operational": 1, 00:19:58.571 "base_bdevs_list": [ 00:19:58.571 { 00:19:58.571 "name": null, 00:19:58.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.571 
"is_configured": false, 00:19:58.571 "data_offset": 0, 00:19:58.571 "data_size": 7936 00:19:58.571 }, 00:19:58.571 { 00:19:58.571 "name": "BaseBdev2", 00:19:58.571 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:58.571 "is_configured": true, 00:19:58.571 "data_offset": 256, 00:19:58.571 "data_size": 7936 00:19:58.571 } 00:19:58.571 ] 00:19:58.571 }' 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.571 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.830 03:31:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.830 [2024-11-05 03:31:12.254944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.830 [2024-11-05 03:31:12.255149] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:58.830 [2024-11-05 03:31:12.255179] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:58.830 request: 00:19:58.830 { 00:19:58.830 "base_bdev": "BaseBdev1", 00:19:58.830 "raid_bdev": "raid_bdev1", 00:19:58.830 "method": "bdev_raid_add_base_bdev", 00:19:58.830 "req_id": 1 00:19:58.830 } 00:19:58.830 Got JSON-RPC error response 00:19:58.830 response: 00:19:58.830 { 00:19:58.830 "code": -22, 00:19:58.830 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:58.830 } 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:58.830 03:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.767 "name": "raid_bdev1", 00:19:59.767 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:19:59.767 "strip_size_kb": 0, 00:19:59.767 "state": "online", 00:19:59.767 "raid_level": "raid1", 00:19:59.767 "superblock": true, 00:19:59.767 "num_base_bdevs": 2, 00:19:59.767 
"num_base_bdevs_discovered": 1, 00:19:59.767 "num_base_bdevs_operational": 1, 00:19:59.767 "base_bdevs_list": [ 00:19:59.767 { 00:19:59.767 "name": null, 00:19:59.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.767 "is_configured": false, 00:19:59.767 "data_offset": 0, 00:19:59.767 "data_size": 7936 00:19:59.767 }, 00:19:59.767 { 00:19:59.767 "name": "BaseBdev2", 00:19:59.767 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:19:59.767 "is_configured": true, 00:19:59.767 "data_offset": 256, 00:19:59.767 "data_size": 7936 00:19:59.767 } 00:19:59.767 ] 00:19:59.767 }' 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.767 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.336 "name": "raid_bdev1", 00:20:00.336 "uuid": "a7384f7e-55ac-40eb-bfe7-ef9f6cab257f", 00:20:00.336 "strip_size_kb": 0, 00:20:00.336 "state": "online", 00:20:00.336 "raid_level": "raid1", 00:20:00.336 "superblock": true, 00:20:00.336 "num_base_bdevs": 2, 00:20:00.336 "num_base_bdevs_discovered": 1, 00:20:00.336 "num_base_bdevs_operational": 1, 00:20:00.336 "base_bdevs_list": [ 00:20:00.336 { 00:20:00.336 "name": null, 00:20:00.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.336 "is_configured": false, 00:20:00.336 "data_offset": 0, 00:20:00.336 "data_size": 7936 00:20:00.336 }, 00:20:00.336 { 00:20:00.336 "name": "BaseBdev2", 00:20:00.336 "uuid": "b49db0bc-5181-5570-999f-6f093bb51372", 00:20:00.336 "is_configured": true, 00:20:00.336 "data_offset": 256, 00:20:00.336 "data_size": 7936 00:20:00.336 } 00:20:00.336 ] 00:20:00.336 }' 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87980 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87980 ']' 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87980 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:20:00.336 03:31:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:00.336 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87980 00:20:00.336 killing process with pid 87980 00:20:00.336 Received shutdown signal, test time was about 60.000000 seconds 00:20:00.336 00:20:00.336 Latency(us) 00:20:00.336 [2024-11-05T03:31:13.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.336 [2024-11-05T03:31:13.975Z] =================================================================================================================== 00:20:00.337 [2024-11-05T03:31:13.976Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.337 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:00.337 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:00.337 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87980' 00:20:00.337 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87980 00:20:00.337 [2024-11-05 03:31:13.951996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.337 03:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87980 00:20:00.337 [2024-11-05 03:31:13.952132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.337 [2024-11-05 03:31:13.952189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.337 [2024-11-05 03:31:13.952206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:00.595 [2024-11-05 03:31:14.219391] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:20:01.971 03:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:01.971 00:20:01.971 real 0m21.367s 00:20:01.971 user 0m29.061s 00:20:01.971 sys 0m2.429s 00:20:01.971 03:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:01.971 ************************************ 00:20:01.971 END TEST raid_rebuild_test_sb_md_separate 00:20:01.971 ************************************ 00:20:01.971 03:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.971 03:31:15 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:01.971 03:31:15 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:01.971 03:31:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:01.971 03:31:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:01.971 03:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.971 ************************************ 00:20:01.971 START TEST raid_state_function_test_sb_md_interleaved 00:20:01.971 ************************************ 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:01.971 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:01.972 03:31:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88686 00:20:01.972 Process raid pid: 88686 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88686' 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88686 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88686 ']' 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:01.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:01.972 03:31:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.972 [2024-11-05 03:31:15.346375] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:20:01.972 [2024-11-05 03:31:15.346617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.972 [2024-11-05 03:31:15.531583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.230 [2024-11-05 03:31:15.645034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.230 [2024-11-05 03:31:15.842562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.230 [2024-11-05 03:31:15.842605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.798 [2024-11-05 03:31:16.307838] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.798 [2024-11-05 03:31:16.307927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:02.798 [2024-11-05 03:31:16.307943] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.798 [2024-11-05 03:31:16.307959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.798 03:31:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.798 03:31:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.798 "name": "Existed_Raid", 00:20:02.798 "uuid": "ad28cb95-751c-4174-8789-f7270a2e4c85", 00:20:02.798 "strip_size_kb": 0, 00:20:02.798 "state": "configuring", 00:20:02.798 "raid_level": "raid1", 00:20:02.798 "superblock": true, 00:20:02.798 "num_base_bdevs": 2, 00:20:02.798 "num_base_bdevs_discovered": 0, 00:20:02.798 "num_base_bdevs_operational": 2, 00:20:02.798 "base_bdevs_list": [ 00:20:02.798 { 00:20:02.798 "name": "BaseBdev1", 00:20:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.798 "is_configured": false, 00:20:02.798 "data_offset": 0, 00:20:02.798 "data_size": 0 00:20:02.798 }, 00:20:02.798 { 00:20:02.798 "name": "BaseBdev2", 00:20:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.798 "is_configured": false, 00:20:02.798 "data_offset": 0, 00:20:02.798 "data_size": 0 00:20:02.798 } 00:20:02.798 ] 00:20:02.798 }' 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.798 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.365 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.366 [2024-11-05 03:31:16.811978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.366 [2024-11-05 03:31:16.812037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.366 [2024-11-05 03:31:16.819968] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.366 [2024-11-05 03:31:16.820033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.366 [2024-11-05 03:31:16.820048] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.366 [2024-11-05 03:31:16.820066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.366 [2024-11-05 03:31:16.862429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.366 BaseBdev1 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.366 [ 00:20:03.366 { 00:20:03.366 "name": "BaseBdev1", 00:20:03.366 "aliases": [ 00:20:03.366 "ba891ab0-28e3-4aae-9407-02f6c7e14665" 00:20:03.366 ], 00:20:03.366 "product_name": "Malloc disk", 00:20:03.366 "block_size": 4128, 00:20:03.366 "num_blocks": 8192, 00:20:03.366 "uuid": "ba891ab0-28e3-4aae-9407-02f6c7e14665", 00:20:03.366 "md_size": 32, 00:20:03.366 
"md_interleave": true, 00:20:03.366 "dif_type": 0, 00:20:03.366 "assigned_rate_limits": { 00:20:03.366 "rw_ios_per_sec": 0, 00:20:03.366 "rw_mbytes_per_sec": 0, 00:20:03.366 "r_mbytes_per_sec": 0, 00:20:03.366 "w_mbytes_per_sec": 0 00:20:03.366 }, 00:20:03.366 "claimed": true, 00:20:03.366 "claim_type": "exclusive_write", 00:20:03.366 "zoned": false, 00:20:03.366 "supported_io_types": { 00:20:03.366 "read": true, 00:20:03.366 "write": true, 00:20:03.366 "unmap": true, 00:20:03.366 "flush": true, 00:20:03.366 "reset": true, 00:20:03.366 "nvme_admin": false, 00:20:03.366 "nvme_io": false, 00:20:03.366 "nvme_io_md": false, 00:20:03.366 "write_zeroes": true, 00:20:03.366 "zcopy": true, 00:20:03.366 "get_zone_info": false, 00:20:03.366 "zone_management": false, 00:20:03.366 "zone_append": false, 00:20:03.366 "compare": false, 00:20:03.366 "compare_and_write": false, 00:20:03.366 "abort": true, 00:20:03.366 "seek_hole": false, 00:20:03.366 "seek_data": false, 00:20:03.366 "copy": true, 00:20:03.366 "nvme_iov_md": false 00:20:03.366 }, 00:20:03.366 "memory_domains": [ 00:20:03.366 { 00:20:03.366 "dma_device_id": "system", 00:20:03.366 "dma_device_type": 1 00:20:03.366 }, 00:20:03.366 { 00:20:03.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.366 "dma_device_type": 2 00:20:03.366 } 00:20:03.366 ], 00:20:03.366 "driver_specific": {} 00:20:03.366 } 00:20:03.366 ] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.366 03:31:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.366 "name": "Existed_Raid", 00:20:03.366 "uuid": "a99abac3-8874-4275-8a07-b0516f87b4c3", 00:20:03.366 "strip_size_kb": 0, 00:20:03.366 "state": "configuring", 00:20:03.366 "raid_level": "raid1", 
00:20:03.366 "superblock": true, 00:20:03.366 "num_base_bdevs": 2, 00:20:03.366 "num_base_bdevs_discovered": 1, 00:20:03.366 "num_base_bdevs_operational": 2, 00:20:03.366 "base_bdevs_list": [ 00:20:03.366 { 00:20:03.366 "name": "BaseBdev1", 00:20:03.366 "uuid": "ba891ab0-28e3-4aae-9407-02f6c7e14665", 00:20:03.366 "is_configured": true, 00:20:03.366 "data_offset": 256, 00:20:03.366 "data_size": 7936 00:20:03.366 }, 00:20:03.366 { 00:20:03.366 "name": "BaseBdev2", 00:20:03.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.366 "is_configured": false, 00:20:03.366 "data_offset": 0, 00:20:03.366 "data_size": 0 00:20:03.366 } 00:20:03.366 ] 00:20:03.366 }' 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.366 03:31:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 [2024-11-05 03:31:17.418747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.934 [2024-11-05 03:31:17.418857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 [2024-11-05 03:31:17.426880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.934 [2024-11-05 03:31:17.429645] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.934 [2024-11-05 03:31:17.429728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.934 
03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.934 "name": "Existed_Raid", 00:20:03.934 "uuid": "6f48a155-500c-4a73-b200-016ee8dc3880", 00:20:03.934 "strip_size_kb": 0, 00:20:03.934 "state": "configuring", 00:20:03.934 "raid_level": "raid1", 00:20:03.934 "superblock": true, 00:20:03.934 "num_base_bdevs": 2, 00:20:03.934 "num_base_bdevs_discovered": 1, 00:20:03.934 "num_base_bdevs_operational": 2, 00:20:03.934 "base_bdevs_list": [ 00:20:03.934 { 00:20:03.934 "name": "BaseBdev1", 00:20:03.934 "uuid": "ba891ab0-28e3-4aae-9407-02f6c7e14665", 00:20:03.934 "is_configured": true, 00:20:03.934 "data_offset": 256, 00:20:03.934 "data_size": 7936 00:20:03.934 }, 00:20:03.934 { 00:20:03.934 "name": "BaseBdev2", 00:20:03.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.934 "is_configured": false, 00:20:03.934 "data_offset": 0, 00:20:03.934 "data_size": 0 00:20:03.934 } 00:20:03.934 ] 00:20:03.934 }' 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:03.934 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.502 [2024-11-05 03:31:17.971448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.502 [2024-11-05 03:31:17.971755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:04.502 [2024-11-05 03:31:17.971783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:04.502 [2024-11-05 03:31:17.971947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:04.502 [2024-11-05 03:31:17.972064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:04.502 [2024-11-05 03:31:17.972083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:04.502 [2024-11-05 03:31:17.972188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.502 BaseBdev2 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.502 03:31:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.502 [ 00:20:04.502 { 00:20:04.502 "name": "BaseBdev2", 00:20:04.502 "aliases": [ 00:20:04.502 "78843817-aad3-48eb-a4bb-76e7530e42d2" 00:20:04.502 ], 00:20:04.502 "product_name": "Malloc disk", 00:20:04.502 "block_size": 4128, 00:20:04.502 "num_blocks": 8192, 00:20:04.502 "uuid": "78843817-aad3-48eb-a4bb-76e7530e42d2", 00:20:04.502 "md_size": 32, 00:20:04.502 "md_interleave": true, 00:20:04.502 "dif_type": 0, 00:20:04.502 "assigned_rate_limits": { 00:20:04.502 "rw_ios_per_sec": 0, 00:20:04.502 "rw_mbytes_per_sec": 0, 00:20:04.502 "r_mbytes_per_sec": 0, 00:20:04.502 "w_mbytes_per_sec": 0 00:20:04.502 }, 00:20:04.502 "claimed": true, 00:20:04.502 "claim_type": "exclusive_write", 
00:20:04.502 "zoned": false, 00:20:04.502 "supported_io_types": { 00:20:04.502 "read": true, 00:20:04.502 "write": true, 00:20:04.502 "unmap": true, 00:20:04.502 "flush": true, 00:20:04.502 "reset": true, 00:20:04.502 "nvme_admin": false, 00:20:04.502 "nvme_io": false, 00:20:04.502 "nvme_io_md": false, 00:20:04.502 "write_zeroes": true, 00:20:04.502 "zcopy": true, 00:20:04.502 "get_zone_info": false, 00:20:04.502 "zone_management": false, 00:20:04.502 "zone_append": false, 00:20:04.502 "compare": false, 00:20:04.502 "compare_and_write": false, 00:20:04.502 "abort": true, 00:20:04.502 "seek_hole": false, 00:20:04.502 "seek_data": false, 00:20:04.502 "copy": true, 00:20:04.502 "nvme_iov_md": false 00:20:04.502 }, 00:20:04.502 "memory_domains": [ 00:20:04.502 { 00:20:04.502 "dma_device_id": "system", 00:20:04.502 "dma_device_type": 1 00:20:04.502 }, 00:20:04.502 { 00:20:04.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.502 "dma_device_type": 2 00:20:04.502 } 00:20:04.502 ], 00:20:04.502 "driver_specific": {} 00:20:04.502 } 00:20:04.502 ] 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.502 
03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.502 "name": "Existed_Raid", 00:20:04.502 "uuid": "6f48a155-500c-4a73-b200-016ee8dc3880", 00:20:04.502 "strip_size_kb": 0, 00:20:04.502 "state": "online", 00:20:04.502 "raid_level": "raid1", 00:20:04.502 "superblock": true, 00:20:04.502 "num_base_bdevs": 2, 00:20:04.502 "num_base_bdevs_discovered": 2, 00:20:04.502 
"num_base_bdevs_operational": 2, 00:20:04.502 "base_bdevs_list": [ 00:20:04.502 { 00:20:04.502 "name": "BaseBdev1", 00:20:04.502 "uuid": "ba891ab0-28e3-4aae-9407-02f6c7e14665", 00:20:04.502 "is_configured": true, 00:20:04.502 "data_offset": 256, 00:20:04.502 "data_size": 7936 00:20:04.502 }, 00:20:04.502 { 00:20:04.502 "name": "BaseBdev2", 00:20:04.502 "uuid": "78843817-aad3-48eb-a4bb-76e7530e42d2", 00:20:04.502 "is_configured": true, 00:20:04.502 "data_offset": 256, 00:20:04.502 "data_size": 7936 00:20:04.502 } 00:20:04.502 ] 00:20:04.502 }' 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.502 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.071 03:31:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:05.071 [2024-11-05 03:31:18.544071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:05.071 "name": "Existed_Raid", 00:20:05.071 "aliases": [ 00:20:05.071 "6f48a155-500c-4a73-b200-016ee8dc3880" 00:20:05.071 ], 00:20:05.071 "product_name": "Raid Volume", 00:20:05.071 "block_size": 4128, 00:20:05.071 "num_blocks": 7936, 00:20:05.071 "uuid": "6f48a155-500c-4a73-b200-016ee8dc3880", 00:20:05.071 "md_size": 32, 00:20:05.071 "md_interleave": true, 00:20:05.071 "dif_type": 0, 00:20:05.071 "assigned_rate_limits": { 00:20:05.071 "rw_ios_per_sec": 0, 00:20:05.071 "rw_mbytes_per_sec": 0, 00:20:05.071 "r_mbytes_per_sec": 0, 00:20:05.071 "w_mbytes_per_sec": 0 00:20:05.071 }, 00:20:05.071 "claimed": false, 00:20:05.071 "zoned": false, 00:20:05.071 "supported_io_types": { 00:20:05.071 "read": true, 00:20:05.071 "write": true, 00:20:05.071 "unmap": false, 00:20:05.071 "flush": false, 00:20:05.071 "reset": true, 00:20:05.071 "nvme_admin": false, 00:20:05.071 "nvme_io": false, 00:20:05.071 "nvme_io_md": false, 00:20:05.071 "write_zeroes": true, 00:20:05.071 "zcopy": false, 00:20:05.071 "get_zone_info": false, 00:20:05.071 "zone_management": false, 00:20:05.071 "zone_append": false, 00:20:05.071 "compare": false, 00:20:05.071 "compare_and_write": false, 00:20:05.071 "abort": false, 00:20:05.071 "seek_hole": false, 00:20:05.071 "seek_data": false, 00:20:05.071 "copy": false, 00:20:05.071 "nvme_iov_md": false 00:20:05.071 }, 00:20:05.071 "memory_domains": [ 00:20:05.071 { 00:20:05.071 "dma_device_id": "system", 00:20:05.071 "dma_device_type": 1 00:20:05.071 }, 00:20:05.071 { 00:20:05.071 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:05.071 "dma_device_type": 2 00:20:05.071 }, 00:20:05.071 { 00:20:05.071 "dma_device_id": "system", 00:20:05.071 "dma_device_type": 1 00:20:05.071 }, 00:20:05.071 { 00:20:05.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.071 "dma_device_type": 2 00:20:05.071 } 00:20:05.071 ], 00:20:05.071 "driver_specific": { 00:20:05.071 "raid": { 00:20:05.071 "uuid": "6f48a155-500c-4a73-b200-016ee8dc3880", 00:20:05.071 "strip_size_kb": 0, 00:20:05.071 "state": "online", 00:20:05.071 "raid_level": "raid1", 00:20:05.071 "superblock": true, 00:20:05.071 "num_base_bdevs": 2, 00:20:05.071 "num_base_bdevs_discovered": 2, 00:20:05.071 "num_base_bdevs_operational": 2, 00:20:05.071 "base_bdevs_list": [ 00:20:05.071 { 00:20:05.071 "name": "BaseBdev1", 00:20:05.071 "uuid": "ba891ab0-28e3-4aae-9407-02f6c7e14665", 00:20:05.071 "is_configured": true, 00:20:05.071 "data_offset": 256, 00:20:05.071 "data_size": 7936 00:20:05.071 }, 00:20:05.071 { 00:20:05.071 "name": "BaseBdev2", 00:20:05.071 "uuid": "78843817-aad3-48eb-a4bb-76e7530e42d2", 00:20:05.071 "is_configured": true, 00:20:05.071 "data_offset": 256, 00:20:05.071 "data_size": 7936 00:20:05.071 } 00:20:05.071 ] 00:20:05.071 } 00:20:05.071 } 00:20:05.071 }' 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:05.071 BaseBdev2' 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.071 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:05.330 
03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.330 [2024-11-05 03:31:18.811808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:05.330 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.331 03:31:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.331 "name": "Existed_Raid", 00:20:05.331 "uuid": "6f48a155-500c-4a73-b200-016ee8dc3880", 00:20:05.331 "strip_size_kb": 0, 00:20:05.331 "state": "online", 00:20:05.331 "raid_level": "raid1", 00:20:05.331 "superblock": true, 00:20:05.331 "num_base_bdevs": 2, 00:20:05.331 "num_base_bdevs_discovered": 1, 00:20:05.331 "num_base_bdevs_operational": 1, 00:20:05.331 "base_bdevs_list": [ 00:20:05.331 { 00:20:05.331 "name": null, 00:20:05.331 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:05.331 "is_configured": false, 00:20:05.331 "data_offset": 0, 00:20:05.331 "data_size": 7936 00:20:05.331 }, 00:20:05.331 { 00:20:05.331 "name": "BaseBdev2", 00:20:05.331 "uuid": "78843817-aad3-48eb-a4bb-76e7530e42d2", 00:20:05.331 "is_configured": true, 00:20:05.331 "data_offset": 256, 00:20:05.331 "data_size": 7936 00:20:05.331 } 00:20:05.331 ] 00:20:05.331 }' 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.331 03:31:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:05.898 03:31:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.898 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.898 [2024-11-05 03:31:19.482439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:05.898 [2024-11-05 03:31:19.482593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.156 [2024-11-05 03:31:19.557797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.156 [2024-11-05 03:31:19.557861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.156 [2024-11-05 03:31:19.557879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88686 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88686 ']' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88686 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88686 00:20:06.156 killing process with pid 88686 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88686' 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88686 00:20:06.156 [2024-11-05 03:31:19.649101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.156 03:31:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88686 00:20:06.156 [2024-11-05 03:31:19.663517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.091 
************************************ 00:20:07.091 END TEST raid_state_function_test_sb_md_interleaved 00:20:07.091 ************************************ 00:20:07.091 03:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:07.091 00:20:07.091 real 0m5.391s 00:20:07.091 user 0m8.209s 00:20:07.091 sys 0m0.781s 00:20:07.091 03:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:07.091 03:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.091 03:31:20 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:07.091 03:31:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:07.091 03:31:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.091 03:31:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.091 ************************************ 00:20:07.091 START TEST raid_superblock_test_md_interleaved 00:20:07.091 ************************************ 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88944 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88944 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88944 ']' 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.091 03:31:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.349 [2024-11-05 03:31:20.791604] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:07.349 [2024-11-05 03:31:20.792121] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88944 ] 00:20:07.349 [2024-11-05 03:31:20.977712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.608 [2024-11-05 03:31:21.091421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.866 [2024-11-05 03:31:21.286518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.866 [2024-11-05 03:31:21.286588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.125 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 malloc1 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 [2024-11-05 03:31:21.793047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:08.384 [2024-11-05 03:31:21.793130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.384 [2024-11-05 03:31:21.793161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:08.384 [2024-11-05 03:31:21.793176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.384 
[2024-11-05 03:31:21.795582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.384 [2024-11-05 03:31:21.795800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:08.384 pt1 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 malloc2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 [2024-11-05 03:31:21.846356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.384 [2024-11-05 03:31:21.846445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.384 [2024-11-05 03:31:21.846476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:08.384 [2024-11-05 03:31:21.846490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.384 [2024-11-05 03:31:21.848781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.384 [2024-11-05 03:31:21.848822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.384 pt2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 [2024-11-05 03:31:21.854391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:08.384 [2024-11-05 03:31:21.856798] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.384 [2024-11-05 03:31:21.857052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:08.384 [2024-11-05 03:31:21.857071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:08.384 [2024-11-05 03:31:21.857165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:08.384 [2024-11-05 03:31:21.857254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:08.384 [2024-11-05 03:31:21.857272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:08.384 [2024-11-05 03:31:21.857414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.384 
03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.384 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.385 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.385 "name": "raid_bdev1", 00:20:08.385 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:08.385 "strip_size_kb": 0, 00:20:08.385 "state": "online", 00:20:08.385 "raid_level": "raid1", 00:20:08.385 "superblock": true, 00:20:08.385 "num_base_bdevs": 2, 00:20:08.385 "num_base_bdevs_discovered": 2, 00:20:08.385 "num_base_bdevs_operational": 2, 00:20:08.385 "base_bdevs_list": [ 00:20:08.385 { 00:20:08.385 "name": "pt1", 00:20:08.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.385 "is_configured": true, 00:20:08.385 "data_offset": 256, 00:20:08.385 "data_size": 7936 00:20:08.385 }, 00:20:08.385 { 00:20:08.385 "name": "pt2", 00:20:08.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.385 "is_configured": true, 00:20:08.385 "data_offset": 256, 00:20:08.385 "data_size": 7936 00:20:08.385 } 00:20:08.385 ] 00:20:08.385 }' 00:20:08.385 03:31:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.385 03:31:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.952 [2024-11-05 03:31:22.382931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:08.952 "name": "raid_bdev1", 00:20:08.952 "aliases": [ 00:20:08.952 "f8e78f78-2b1c-4cbb-8592-91be21cf6473" 00:20:08.952 ], 00:20:08.952 "product_name": "Raid Volume", 00:20:08.952 "block_size": 4128, 00:20:08.952 "num_blocks": 7936, 00:20:08.952 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:08.952 "md_size": 32, 
00:20:08.952 "md_interleave": true, 00:20:08.952 "dif_type": 0, 00:20:08.952 "assigned_rate_limits": { 00:20:08.952 "rw_ios_per_sec": 0, 00:20:08.952 "rw_mbytes_per_sec": 0, 00:20:08.952 "r_mbytes_per_sec": 0, 00:20:08.952 "w_mbytes_per_sec": 0 00:20:08.952 }, 00:20:08.952 "claimed": false, 00:20:08.952 "zoned": false, 00:20:08.952 "supported_io_types": { 00:20:08.952 "read": true, 00:20:08.952 "write": true, 00:20:08.952 "unmap": false, 00:20:08.952 "flush": false, 00:20:08.952 "reset": true, 00:20:08.952 "nvme_admin": false, 00:20:08.952 "nvme_io": false, 00:20:08.952 "nvme_io_md": false, 00:20:08.952 "write_zeroes": true, 00:20:08.952 "zcopy": false, 00:20:08.952 "get_zone_info": false, 00:20:08.952 "zone_management": false, 00:20:08.952 "zone_append": false, 00:20:08.952 "compare": false, 00:20:08.952 "compare_and_write": false, 00:20:08.952 "abort": false, 00:20:08.952 "seek_hole": false, 00:20:08.952 "seek_data": false, 00:20:08.952 "copy": false, 00:20:08.952 "nvme_iov_md": false 00:20:08.952 }, 00:20:08.952 "memory_domains": [ 00:20:08.952 { 00:20:08.952 "dma_device_id": "system", 00:20:08.952 "dma_device_type": 1 00:20:08.952 }, 00:20:08.952 { 00:20:08.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.952 "dma_device_type": 2 00:20:08.952 }, 00:20:08.952 { 00:20:08.952 "dma_device_id": "system", 00:20:08.952 "dma_device_type": 1 00:20:08.952 }, 00:20:08.952 { 00:20:08.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.952 "dma_device_type": 2 00:20:08.952 } 00:20:08.952 ], 00:20:08.952 "driver_specific": { 00:20:08.952 "raid": { 00:20:08.952 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:08.952 "strip_size_kb": 0, 00:20:08.952 "state": "online", 00:20:08.952 "raid_level": "raid1", 00:20:08.952 "superblock": true, 00:20:08.952 "num_base_bdevs": 2, 00:20:08.952 "num_base_bdevs_discovered": 2, 00:20:08.952 "num_base_bdevs_operational": 2, 00:20:08.952 "base_bdevs_list": [ 00:20:08.952 { 00:20:08.952 "name": "pt1", 00:20:08.952 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:08.952 "is_configured": true, 00:20:08.952 "data_offset": 256, 00:20:08.952 "data_size": 7936 00:20:08.952 }, 00:20:08.952 { 00:20:08.952 "name": "pt2", 00:20:08.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.952 "is_configured": true, 00:20:08.952 "data_offset": 256, 00:20:08.952 "data_size": 7936 00:20:08.952 } 00:20:08.952 ] 00:20:08.952 } 00:20:08.952 } 00:20:08.952 }' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:08.952 pt2' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:08.952 03:31:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.952 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 [2024-11-05 03:31:22.635018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f8e78f78-2b1c-4cbb-8592-91be21cf6473 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f8e78f78-2b1c-4cbb-8592-91be21cf6473 ']' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 [2024-11-05 03:31:22.674595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.212 [2024-11-05 03:31:22.674827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.212 [2024-11-05 03:31:22.674959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.212 [2024-11-05 03:31:22.675031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.212 [2024-11-05 03:31:22.675051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 03:31:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 [2024-11-05 03:31:22.818738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:09.212 [2024-11-05 03:31:22.821303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:09.212 [2024-11-05 03:31:22.821470] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:09.212 [2024-11-05 03:31:22.821539] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:09.212 [2024-11-05 03:31:22.821565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.212 [2024-11-05 03:31:22.821580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:09.212 request: 00:20:09.212 { 00:20:09.212 "name": "raid_bdev1", 00:20:09.212 "raid_level": "raid1", 00:20:09.212 "base_bdevs": [ 00:20:09.212 "malloc1", 00:20:09.212 "malloc2" 00:20:09.212 ], 00:20:09.212 "superblock": false, 00:20:09.212 "method": "bdev_raid_create", 00:20:09.212 "req_id": 1 00:20:09.212 } 00:20:09.212 Got JSON-RPC error response 00:20:09.212 response: 00:20:09.212 { 00:20:09.212 "code": -17, 00:20:09.212 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:09.212 } 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 03:31:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.471 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:09.471 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.472 [2024-11-05 03:31:22.886743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:09.472 [2024-11-05 03:31:22.886832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.472 [2024-11-05 03:31:22.886855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:09.472 [2024-11-05 03:31:22.886871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.472 [2024-11-05 03:31:22.889561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.472 [2024-11-05 03:31:22.889610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:09.472 [2024-11-05 03:31:22.889727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:09.472 [2024-11-05 03:31:22.889800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:09.472 pt1 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.472 03:31:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.472 
"name": "raid_bdev1", 00:20:09.472 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:09.472 "strip_size_kb": 0, 00:20:09.472 "state": "configuring", 00:20:09.472 "raid_level": "raid1", 00:20:09.472 "superblock": true, 00:20:09.472 "num_base_bdevs": 2, 00:20:09.472 "num_base_bdevs_discovered": 1, 00:20:09.472 "num_base_bdevs_operational": 2, 00:20:09.472 "base_bdevs_list": [ 00:20:09.472 { 00:20:09.472 "name": "pt1", 00:20:09.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.472 "is_configured": true, 00:20:09.472 "data_offset": 256, 00:20:09.472 "data_size": 7936 00:20:09.472 }, 00:20:09.472 { 00:20:09.472 "name": null, 00:20:09.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.472 "is_configured": false, 00:20:09.472 "data_offset": 256, 00:20:09.472 "data_size": 7936 00:20:09.472 } 00:20:09.472 ] 00:20:09.472 }' 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.472 03:31:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.041 [2024-11-05 03:31:23.414884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:10.041 [2024-11-05 03:31:23.414971] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.041 [2024-11-05 03:31:23.414999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:10.041 [2024-11-05 03:31:23.415015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.041 [2024-11-05 03:31:23.415189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.041 [2024-11-05 03:31:23.415213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:10.041 [2024-11-05 03:31:23.415267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:10.041 [2024-11-05 03:31:23.415300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.041 [2024-11-05 03:31:23.415465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:10.041 [2024-11-05 03:31:23.415501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:10.041 [2024-11-05 03:31:23.415584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:10.041 [2024-11-05 03:31:23.415732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:10.041 [2024-11-05 03:31:23.415745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:10.041 [2024-11-05 03:31:23.415833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.041 pt2 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:10.041 03:31:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.041 "name": 
"raid_bdev1", 00:20:10.041 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:10.041 "strip_size_kb": 0, 00:20:10.041 "state": "online", 00:20:10.041 "raid_level": "raid1", 00:20:10.041 "superblock": true, 00:20:10.041 "num_base_bdevs": 2, 00:20:10.041 "num_base_bdevs_discovered": 2, 00:20:10.041 "num_base_bdevs_operational": 2, 00:20:10.041 "base_bdevs_list": [ 00:20:10.041 { 00:20:10.041 "name": "pt1", 00:20:10.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.041 "is_configured": true, 00:20:10.041 "data_offset": 256, 00:20:10.041 "data_size": 7936 00:20:10.041 }, 00:20:10.041 { 00:20:10.041 "name": "pt2", 00:20:10.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.041 "is_configured": true, 00:20:10.041 "data_offset": 256, 00:20:10.041 "data_size": 7936 00:20:10.041 } 00:20:10.041 ] 00:20:10.041 }' 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.041 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.300 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.300 [2024-11-05 03:31:23.935422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.560 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.560 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.560 "name": "raid_bdev1", 00:20:10.560 "aliases": [ 00:20:10.560 "f8e78f78-2b1c-4cbb-8592-91be21cf6473" 00:20:10.560 ], 00:20:10.560 "product_name": "Raid Volume", 00:20:10.560 "block_size": 4128, 00:20:10.560 "num_blocks": 7936, 00:20:10.560 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:10.560 "md_size": 32, 00:20:10.560 "md_interleave": true, 00:20:10.560 "dif_type": 0, 00:20:10.560 "assigned_rate_limits": { 00:20:10.560 "rw_ios_per_sec": 0, 00:20:10.560 "rw_mbytes_per_sec": 0, 00:20:10.560 "r_mbytes_per_sec": 0, 00:20:10.560 "w_mbytes_per_sec": 0 00:20:10.560 }, 00:20:10.560 "claimed": false, 00:20:10.560 "zoned": false, 00:20:10.560 "supported_io_types": { 00:20:10.560 "read": true, 00:20:10.560 "write": true, 00:20:10.560 "unmap": false, 00:20:10.560 "flush": false, 00:20:10.560 "reset": true, 00:20:10.560 "nvme_admin": false, 00:20:10.560 "nvme_io": false, 00:20:10.560 "nvme_io_md": false, 00:20:10.560 "write_zeroes": true, 00:20:10.560 "zcopy": false, 00:20:10.560 "get_zone_info": false, 00:20:10.560 "zone_management": false, 00:20:10.560 "zone_append": false, 00:20:10.560 "compare": false, 00:20:10.560 "compare_and_write": false, 00:20:10.560 "abort": false, 00:20:10.560 "seek_hole": false, 00:20:10.560 "seek_data": false, 00:20:10.560 "copy": false, 00:20:10.560 "nvme_iov_md": false 00:20:10.560 }, 
00:20:10.560 "memory_domains": [ 00:20:10.560 { 00:20:10.560 "dma_device_id": "system", 00:20:10.560 "dma_device_type": 1 00:20:10.560 }, 00:20:10.560 { 00:20:10.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.560 "dma_device_type": 2 00:20:10.560 }, 00:20:10.560 { 00:20:10.560 "dma_device_id": "system", 00:20:10.560 "dma_device_type": 1 00:20:10.560 }, 00:20:10.560 { 00:20:10.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.560 "dma_device_type": 2 00:20:10.560 } 00:20:10.560 ], 00:20:10.560 "driver_specific": { 00:20:10.560 "raid": { 00:20:10.560 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:10.560 "strip_size_kb": 0, 00:20:10.560 "state": "online", 00:20:10.560 "raid_level": "raid1", 00:20:10.560 "superblock": true, 00:20:10.560 "num_base_bdevs": 2, 00:20:10.560 "num_base_bdevs_discovered": 2, 00:20:10.560 "num_base_bdevs_operational": 2, 00:20:10.560 "base_bdevs_list": [ 00:20:10.560 { 00:20:10.560 "name": "pt1", 00:20:10.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.560 "is_configured": true, 00:20:10.560 "data_offset": 256, 00:20:10.560 "data_size": 7936 00:20:10.560 }, 00:20:10.560 { 00:20:10.560 "name": "pt2", 00:20:10.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.560 "is_configured": true, 00:20:10.560 "data_offset": 256, 00:20:10.560 "data_size": 7936 00:20:10.560 } 00:20:10.560 ] 00:20:10.560 } 00:20:10.560 } 00:20:10.560 }' 00:20:10.560 03:31:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:10.560 pt2' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.560 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.820 [2024-11-05 03:31:24.199449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f8e78f78-2b1c-4cbb-8592-91be21cf6473 '!=' f8e78f78-2b1c-4cbb-8592-91be21cf6473 ']' 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.820 [2024-11-05 03:31:24.243186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:20:10.820 "name": "raid_bdev1", 00:20:10.820 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:10.820 "strip_size_kb": 0, 00:20:10.820 "state": "online", 00:20:10.820 "raid_level": "raid1", 00:20:10.820 "superblock": true, 00:20:10.820 "num_base_bdevs": 2, 00:20:10.820 "num_base_bdevs_discovered": 1, 00:20:10.820 "num_base_bdevs_operational": 1, 00:20:10.820 "base_bdevs_list": [ 00:20:10.820 { 00:20:10.820 "name": null, 00:20:10.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.820 "is_configured": false, 00:20:10.820 "data_offset": 0, 00:20:10.820 "data_size": 7936 00:20:10.820 }, 00:20:10.820 { 00:20:10.820 "name": "pt2", 00:20:10.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.820 "is_configured": true, 00:20:10.820 "data_offset": 256, 00:20:10.820 "data_size": 7936 00:20:10.820 } 00:20:10.820 ] 00:20:10.820 }' 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.820 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.390 [2024-11-05 03:31:24.779236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.390 [2024-11-05 03:31:24.779457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.390 [2024-11-05 03:31:24.779655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.390 [2024-11-05 03:31:24.779865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.390 [2024-11-05 
03:31:24.779993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:11.390 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.391 [2024-11-05 03:31:24.855325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.391 [2024-11-05 03:31:24.855386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.391 [2024-11-05 03:31:24.855411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:11.391 [2024-11-05 03:31:24.855429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.391 [2024-11-05 03:31:24.858262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.391 [2024-11-05 03:31:24.858363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.391 [2024-11-05 03:31:24.858469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:11.391 [2024-11-05 03:31:24.858542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.391 [2024-11-05 03:31:24.858629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:11.391 [2024-11-05 03:31:24.858652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:20:11.391 [2024-11-05 03:31:24.858774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:11.391 [2024-11-05 03:31:24.858915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:11.391 [2024-11-05 03:31:24.858932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:11.391 [2024-11-05 03:31:24.859020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.391 pt2 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.391 "name": "raid_bdev1", 00:20:11.391 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:11.391 "strip_size_kb": 0, 00:20:11.391 "state": "online", 00:20:11.391 "raid_level": "raid1", 00:20:11.391 "superblock": true, 00:20:11.391 "num_base_bdevs": 2, 00:20:11.391 "num_base_bdevs_discovered": 1, 00:20:11.391 "num_base_bdevs_operational": 1, 00:20:11.391 "base_bdevs_list": [ 00:20:11.391 { 00:20:11.391 "name": null, 00:20:11.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.391 "is_configured": false, 00:20:11.391 "data_offset": 256, 00:20:11.391 "data_size": 7936 00:20:11.391 }, 00:20:11.391 { 00:20:11.391 "name": "pt2", 00:20:11.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.391 "is_configured": true, 00:20:11.391 "data_offset": 256, 00:20:11.391 "data_size": 7936 00:20:11.391 } 00:20:11.391 ] 00:20:11.391 }' 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.391 03:31:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 [2024-11-05 03:31:25.395438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.959 [2024-11-05 03:31:25.395472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.959 [2024-11-05 03:31:25.395558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.959 [2024-11-05 03:31:25.395625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.959 [2024-11-05 03:31:25.395641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.959 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.960 [2024-11-05 03:31:25.459496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:11.960 [2024-11-05 03:31:25.459581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.960 [2024-11-05 03:31:25.459612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:11.960 [2024-11-05 03:31:25.459627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.960 [2024-11-05 03:31:25.462295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.960 [2024-11-05 03:31:25.462366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:11.960 [2024-11-05 03:31:25.462437] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:11.960 [2024-11-05 03:31:25.462513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:11.960 [2024-11-05 03:31:25.462647] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:11.960 [2024-11-05 03:31:25.462694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.960 [2024-11-05 03:31:25.462716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:11.960 [2024-11-05 03:31:25.462778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.960 [2024-11-05 03:31:25.462869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:11.960 [2024-11-05 03:31:25.462884] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:11.960 [2024-11-05 03:31:25.463013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:11.960 [2024-11-05 03:31:25.463106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:11.960 [2024-11-05 03:31:25.463133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:11.960 [2024-11-05 03:31:25.463221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.960 pt1 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.960 "name": "raid_bdev1", 00:20:11.960 "uuid": "f8e78f78-2b1c-4cbb-8592-91be21cf6473", 00:20:11.960 "strip_size_kb": 0, 00:20:11.960 "state": "online", 00:20:11.960 "raid_level": "raid1", 00:20:11.960 "superblock": true, 00:20:11.960 "num_base_bdevs": 2, 00:20:11.960 "num_base_bdevs_discovered": 1, 00:20:11.960 "num_base_bdevs_operational": 1, 00:20:11.960 "base_bdevs_list": [ 00:20:11.960 { 00:20:11.960 "name": null, 00:20:11.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.960 "is_configured": false, 00:20:11.960 "data_offset": 256, 00:20:11.960 "data_size": 7936 00:20:11.960 }, 00:20:11.960 { 00:20:11.960 "name": "pt2", 00:20:11.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.960 "is_configured": true, 00:20:11.960 "data_offset": 256, 00:20:11.960 "data_size": 7936 00:20:11.960 } 00:20:11.960 ] 00:20:11.960 }' 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.960 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.528 03:31:25 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:12.528 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.528 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.528 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:12.528 03:31:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:12.528 [2024-11-05 03:31:26.023934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f8e78f78-2b1c-4cbb-8592-91be21cf6473 '!=' f8e78f78-2b1c-4cbb-8592-91be21cf6473 ']' 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88944 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88944 ']' 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88944 00:20:12.528 03:31:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88944 00:20:12.528 killing process with pid 88944 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88944' 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 88944 00:20:12.528 [2024-11-05 03:31:26.099986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.528 03:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 88944 00:20:12.528 [2024-11-05 03:31:26.100087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.528 [2024-11-05 03:31:26.100152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.528 [2024-11-05 03:31:26.100173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:12.787 [2024-11-05 03:31:26.267552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.725 ************************************ 00:20:13.725 END TEST raid_superblock_test_md_interleaved 00:20:13.725 ************************************ 00:20:13.725 03:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:13.725 00:20:13.725 real 0m6.546s 00:20:13.725 user 0m10.437s 
00:20:13.725 sys 0m0.953s 00:20:13.725 03:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:13.725 03:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.725 03:31:27 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:13.725 03:31:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:13.725 03:31:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:13.725 03:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.725 ************************************ 00:20:13.725 START TEST raid_rebuild_test_sb_md_interleaved 00:20:13.725 ************************************ 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89268 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@598 -- # waitforlisten 89268 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89268 ']' 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:13.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:13.725 03:31:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.985 [2024-11-05 03:31:27.401407] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:13.985 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:13.985 Zero copy mechanism will not be used. 
00:20:13.985 [2024-11-05 03:31:27.401628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89268 ] 00:20:13.985 [2024-11-05 03:31:27.597285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.243 [2024-11-05 03:31:27.749430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.502 [2024-11-05 03:31:27.945152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.502 [2024-11-05 03:31:27.945193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.761 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:14.761 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:14.761 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:14.761 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:14.761 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.761 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.020 BaseBdev1_malloc 00:20:15.020 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.020 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 [2024-11-05 03:31:28.416084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:15.021 [2024-11-05 03:31:28.416170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.021 [2024-11-05 03:31:28.416199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:15.021 [2024-11-05 03:31:28.416217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.021 [2024-11-05 03:31:28.418897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.021 [2024-11-05 03:31:28.418979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.021 BaseBdev1 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 BaseBdev2_malloc 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.021 [2024-11-05 03:31:28.469336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:15.021 [2024-11-05 03:31:28.469471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.021 [2024-11-05 03:31:28.469519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:15.021 [2024-11-05 03:31:28.469541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.021 [2024-11-05 03:31:28.472134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.021 [2024-11-05 03:31:28.472180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:15.021 BaseBdev2 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 spare_malloc 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 spare_delay 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 [2024-11-05 03:31:28.540725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:15.021 [2024-11-05 03:31:28.540845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.021 [2024-11-05 03:31:28.540875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:15.021 [2024-11-05 03:31:28.540892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.021 [2024-11-05 03:31:28.543605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.021 [2024-11-05 03:31:28.543655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:15.021 spare 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 [2024-11-05 03:31:28.548798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.021 [2024-11-05 03:31:28.551217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.021 [2024-11-05 
03:31:28.551525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:15.021 [2024-11-05 03:31:28.551549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:15.021 [2024-11-05 03:31:28.551663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:15.021 [2024-11-05 03:31:28.551764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:15.021 [2024-11-05 03:31:28.551779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:15.021 [2024-11-05 03:31:28.551913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.021 "name": "raid_bdev1", 00:20:15.021 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:15.021 "strip_size_kb": 0, 00:20:15.021 "state": "online", 00:20:15.021 "raid_level": "raid1", 00:20:15.021 "superblock": true, 00:20:15.021 "num_base_bdevs": 2, 00:20:15.021 "num_base_bdevs_discovered": 2, 00:20:15.021 "num_base_bdevs_operational": 2, 00:20:15.021 "base_bdevs_list": [ 00:20:15.021 { 00:20:15.021 "name": "BaseBdev1", 00:20:15.021 "uuid": "5d8d0c42-7414-5cd7-af91-0c6530dec130", 00:20:15.021 "is_configured": true, 00:20:15.021 "data_offset": 256, 00:20:15.021 "data_size": 7936 00:20:15.021 }, 00:20:15.021 { 00:20:15.021 "name": "BaseBdev2", 00:20:15.021 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:15.021 "is_configured": true, 00:20:15.021 "data_offset": 256, 00:20:15.021 "data_size": 7936 00:20:15.021 } 00:20:15.021 ] 00:20:15.021 }' 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.021 03:31:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.589 03:31:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:15.589 [2024-11-05 03:31:29.069433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:15.589 03:31:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.589 [2024-11-05 03:31:29.176989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.589 03:31:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.589 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.848 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.848 "name": "raid_bdev1", 00:20:15.848 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:15.848 "strip_size_kb": 0, 00:20:15.848 "state": "online", 00:20:15.848 "raid_level": "raid1", 00:20:15.848 "superblock": true, 00:20:15.848 "num_base_bdevs": 2, 00:20:15.848 "num_base_bdevs_discovered": 1, 00:20:15.848 "num_base_bdevs_operational": 1, 00:20:15.848 "base_bdevs_list": [ 00:20:15.848 { 00:20:15.848 "name": null, 00:20:15.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.848 "is_configured": false, 00:20:15.848 "data_offset": 0, 00:20:15.848 "data_size": 7936 00:20:15.848 }, 00:20:15.848 { 00:20:15.848 "name": "BaseBdev2", 00:20:15.848 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:15.848 "is_configured": true, 00:20:15.848 "data_offset": 256, 00:20:15.848 "data_size": 7936 00:20:15.848 } 00:20:15.848 ] 00:20:15.848 }' 00:20:15.848 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.848 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.107 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:16.107 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.107 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.107 [2024-11-05 03:31:29.701237] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.107 [2024-11-05 03:31:29.718376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:16.107 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.107 03:31:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:16.107 [2024-11-05 03:31:29.720960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.493 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.493 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.493 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.494 "name": "raid_bdev1", 00:20:17.494 
"uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:17.494 "strip_size_kb": 0, 00:20:17.494 "state": "online", 00:20:17.494 "raid_level": "raid1", 00:20:17.494 "superblock": true, 00:20:17.494 "num_base_bdevs": 2, 00:20:17.494 "num_base_bdevs_discovered": 2, 00:20:17.494 "num_base_bdevs_operational": 2, 00:20:17.494 "process": { 00:20:17.494 "type": "rebuild", 00:20:17.494 "target": "spare", 00:20:17.494 "progress": { 00:20:17.494 "blocks": 2560, 00:20:17.494 "percent": 32 00:20:17.494 } 00:20:17.494 }, 00:20:17.494 "base_bdevs_list": [ 00:20:17.494 { 00:20:17.494 "name": "spare", 00:20:17.494 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:17.494 "is_configured": true, 00:20:17.494 "data_offset": 256, 00:20:17.494 "data_size": 7936 00:20:17.494 }, 00:20:17.494 { 00:20:17.494 "name": "BaseBdev2", 00:20:17.494 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:17.494 "is_configured": true, 00:20:17.494 "data_offset": 256, 00:20:17.494 "data_size": 7936 00:20:17.494 } 00:20:17.494 ] 00:20:17.494 }' 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.494 [2024-11-05 03:31:30.874361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:17.494 [2024-11-05 03:31:30.929540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:17.494 [2024-11-05 03:31:30.929642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.494 [2024-11-05 03:31:30.929666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.494 [2024-11-05 03:31:30.929685] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.494 03:31:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.494 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.494 "name": "raid_bdev1", 00:20:17.494 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:17.494 "strip_size_kb": 0, 00:20:17.494 "state": "online", 00:20:17.494 "raid_level": "raid1", 00:20:17.494 "superblock": true, 00:20:17.494 "num_base_bdevs": 2, 00:20:17.494 "num_base_bdevs_discovered": 1, 00:20:17.494 "num_base_bdevs_operational": 1, 00:20:17.494 "base_bdevs_list": [ 00:20:17.494 { 00:20:17.494 "name": null, 00:20:17.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.494 "is_configured": false, 00:20:17.494 "data_offset": 0, 00:20:17.494 "data_size": 7936 00:20:17.494 }, 00:20:17.494 { 00:20:17.494 "name": "BaseBdev2", 00:20:17.494 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:17.494 "is_configured": true, 00:20:17.494 "data_offset": 256, 00:20:17.494 "data_size": 7936 00:20:17.494 } 00:20:17.494 ] 00:20:17.494 }' 00:20:17.494 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.494 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.062 "name": "raid_bdev1", 00:20:18.062 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:18.062 "strip_size_kb": 0, 00:20:18.062 "state": "online", 00:20:18.062 "raid_level": "raid1", 00:20:18.062 "superblock": true, 00:20:18.062 "num_base_bdevs": 2, 00:20:18.062 "num_base_bdevs_discovered": 1, 00:20:18.062 "num_base_bdevs_operational": 1, 00:20:18.062 "base_bdevs_list": [ 00:20:18.062 { 00:20:18.062 "name": null, 00:20:18.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.062 "is_configured": false, 00:20:18.062 "data_offset": 0, 00:20:18.062 "data_size": 7936 00:20:18.062 }, 00:20:18.062 { 00:20:18.062 "name": "BaseBdev2", 00:20:18.062 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:18.062 "is_configured": true, 00:20:18.062 "data_offset": 256, 00:20:18.062 "data_size": 7936 00:20:18.062 } 00:20:18.062 ] 00:20:18.062 }' 
00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.062 [2024-11-05 03:31:31.629103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.062 [2024-11-05 03:31:31.644852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.062 03:31:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:18.063 [2024-11-05 03:31:31.647732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.441 "name": "raid_bdev1", 00:20:19.441 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:19.441 "strip_size_kb": 0, 00:20:19.441 "state": "online", 00:20:19.441 "raid_level": "raid1", 00:20:19.441 "superblock": true, 00:20:19.441 "num_base_bdevs": 2, 00:20:19.441 "num_base_bdevs_discovered": 2, 00:20:19.441 "num_base_bdevs_operational": 2, 00:20:19.441 "process": { 00:20:19.441 "type": "rebuild", 00:20:19.441 "target": "spare", 00:20:19.441 "progress": { 00:20:19.441 "blocks": 2560, 00:20:19.441 "percent": 32 00:20:19.441 } 00:20:19.441 }, 00:20:19.441 "base_bdevs_list": [ 00:20:19.441 { 00:20:19.441 "name": "spare", 00:20:19.441 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:19.441 "is_configured": true, 00:20:19.441 "data_offset": 256, 00:20:19.441 "data_size": 7936 00:20:19.441 }, 00:20:19.441 { 00:20:19.441 "name": "BaseBdev2", 00:20:19.441 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:19.441 "is_configured": true, 00:20:19.441 "data_offset": 256, 00:20:19.441 "data_size": 7936 00:20:19.441 } 00:20:19.441 ] 00:20:19.441 }' 00:20:19.441 03:31:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:19.441 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=794 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.441 03:31:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.441 "name": "raid_bdev1", 00:20:19.441 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:19.441 "strip_size_kb": 0, 00:20:19.441 "state": "online", 00:20:19.441 "raid_level": "raid1", 00:20:19.441 "superblock": true, 00:20:19.441 "num_base_bdevs": 2, 00:20:19.441 "num_base_bdevs_discovered": 2, 00:20:19.441 "num_base_bdevs_operational": 2, 00:20:19.441 "process": { 00:20:19.441 "type": "rebuild", 00:20:19.441 "target": "spare", 00:20:19.441 "progress": { 00:20:19.441 "blocks": 2816, 00:20:19.441 "percent": 35 00:20:19.441 } 00:20:19.441 }, 00:20:19.441 "base_bdevs_list": [ 00:20:19.441 { 00:20:19.441 "name": "spare", 00:20:19.441 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:19.441 "is_configured": true, 00:20:19.441 "data_offset": 256, 00:20:19.441 "data_size": 7936 00:20:19.441 }, 00:20:19.441 { 00:20:19.441 "name": "BaseBdev2", 00:20:19.441 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:19.441 "is_configured": true, 00:20:19.441 "data_offset": 256, 00:20:19.441 "data_size": 7936 00:20:19.441 } 00:20:19.441 ] 00:20:19.441 }' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.441 03:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.377 03:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.636 03:31:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.636 "name": "raid_bdev1", 00:20:20.636 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:20.636 "strip_size_kb": 0, 00:20:20.636 "state": "online", 00:20:20.636 "raid_level": "raid1", 00:20:20.636 "superblock": true, 00:20:20.636 "num_base_bdevs": 2, 00:20:20.636 "num_base_bdevs_discovered": 2, 00:20:20.636 "num_base_bdevs_operational": 2, 00:20:20.636 "process": { 00:20:20.636 "type": "rebuild", 00:20:20.636 "target": "spare", 00:20:20.636 "progress": { 00:20:20.636 "blocks": 5888, 00:20:20.636 "percent": 74 00:20:20.636 } 00:20:20.636 }, 00:20:20.636 "base_bdevs_list": [ 00:20:20.636 { 00:20:20.636 "name": "spare", 00:20:20.636 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:20.636 "is_configured": true, 00:20:20.636 "data_offset": 256, 00:20:20.636 "data_size": 7936 00:20:20.636 }, 00:20:20.636 { 00:20:20.636 "name": "BaseBdev2", 00:20:20.636 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:20.636 "is_configured": true, 00:20:20.636 "data_offset": 256, 00:20:20.636 "data_size": 7936 00:20:20.636 } 00:20:20.636 ] 00:20:20.636 }' 00:20:20.636 03:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.636 03:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.636 03:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.636 03:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.636 03:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:21.203 [2024-11-05 03:31:34.771111] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:21.203 [2024-11-05 03:31:34.771199] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:21.203 [2024-11-05 03:31:34.771382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.771 "name": "raid_bdev1", 00:20:21.771 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:21.771 "strip_size_kb": 0, 00:20:21.771 "state": "online", 00:20:21.771 "raid_level": "raid1", 00:20:21.771 "superblock": true, 00:20:21.771 "num_base_bdevs": 2, 00:20:21.771 
"num_base_bdevs_discovered": 2, 00:20:21.771 "num_base_bdevs_operational": 2, 00:20:21.771 "base_bdevs_list": [ 00:20:21.771 { 00:20:21.771 "name": "spare", 00:20:21.771 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:21.771 "is_configured": true, 00:20:21.771 "data_offset": 256, 00:20:21.771 "data_size": 7936 00:20:21.771 }, 00:20:21.771 { 00:20:21.771 "name": "BaseBdev2", 00:20:21.771 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:21.771 "is_configured": true, 00:20:21.771 "data_offset": 256, 00:20:21.771 "data_size": 7936 00:20:21.771 } 00:20:21.771 ] 00:20:21.771 }' 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.771 03:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.771 "name": "raid_bdev1", 00:20:21.771 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:21.771 "strip_size_kb": 0, 00:20:21.771 "state": "online", 00:20:21.771 "raid_level": "raid1", 00:20:21.771 "superblock": true, 00:20:21.771 "num_base_bdevs": 2, 00:20:21.771 "num_base_bdevs_discovered": 2, 00:20:21.771 "num_base_bdevs_operational": 2, 00:20:21.771 "base_bdevs_list": [ 00:20:21.771 { 00:20:21.771 "name": "spare", 00:20:21.771 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:21.771 "is_configured": true, 00:20:21.771 "data_offset": 256, 00:20:21.771 "data_size": 7936 00:20:21.771 }, 00:20:21.771 { 00:20:21.771 "name": "BaseBdev2", 00:20:21.771 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:21.771 "is_configured": true, 00:20:21.771 "data_offset": 256, 00:20:21.771 "data_size": 7936 00:20:21.771 } 00:20:21.771 ] 00:20:21.771 }' 00:20:21.771 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.030 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.031 03:31:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.031 "name": 
"raid_bdev1", 00:20:22.031 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:22.031 "strip_size_kb": 0, 00:20:22.031 "state": "online", 00:20:22.031 "raid_level": "raid1", 00:20:22.031 "superblock": true, 00:20:22.031 "num_base_bdevs": 2, 00:20:22.031 "num_base_bdevs_discovered": 2, 00:20:22.031 "num_base_bdevs_operational": 2, 00:20:22.031 "base_bdevs_list": [ 00:20:22.031 { 00:20:22.031 "name": "spare", 00:20:22.031 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:22.031 "is_configured": true, 00:20:22.031 "data_offset": 256, 00:20:22.031 "data_size": 7936 00:20:22.031 }, 00:20:22.031 { 00:20:22.031 "name": "BaseBdev2", 00:20:22.031 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:22.031 "is_configured": true, 00:20:22.031 "data_offset": 256, 00:20:22.031 "data_size": 7936 00:20:22.031 } 00:20:22.031 ] 00:20:22.031 }' 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.031 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 [2024-11-05 03:31:35.978790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.599 [2024-11-05 03:31:35.978862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.599 [2024-11-05 03:31:35.978978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.599 [2024-11-05 03:31:35.979075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.599 [2024-11-05 
03:31:35.979091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 03:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 03:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 [2024-11-05 03:31:36.050782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:22.599 [2024-11-05 03:31:36.050879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.599 [2024-11-05 03:31:36.050923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:22.599 [2024-11-05 03:31:36.050937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.599 [2024-11-05 03:31:36.053655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.599 [2024-11-05 03:31:36.053696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:22.599 [2024-11-05 03:31:36.053769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:22.599 [2024-11-05 03:31:36.053836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.599 [2024-11-05 03:31:36.054024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:22.599 spare 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 [2024-11-05 03:31:36.154133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:22.599 [2024-11-05 03:31:36.154173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:22.599 [2024-11-05 03:31:36.154300] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:22.599 [2024-11-05 03:31:36.154412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:22.599 [2024-11-05 03:31:36.154438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:22.599 [2024-11-05 03:31:36.154575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.599 03:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.599 "name": "raid_bdev1", 00:20:22.599 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:22.599 "strip_size_kb": 0, 00:20:22.599 "state": "online", 00:20:22.599 "raid_level": "raid1", 00:20:22.599 "superblock": true, 00:20:22.599 "num_base_bdevs": 2, 00:20:22.599 "num_base_bdevs_discovered": 2, 00:20:22.599 "num_base_bdevs_operational": 2, 00:20:22.599 "base_bdevs_list": [ 00:20:22.599 { 00:20:22.599 "name": "spare", 00:20:22.599 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:22.599 "is_configured": true, 00:20:22.599 "data_offset": 256, 00:20:22.599 "data_size": 7936 00:20:22.599 }, 00:20:22.599 { 00:20:22.599 "name": "BaseBdev2", 00:20:22.599 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:22.599 "is_configured": true, 00:20:22.599 "data_offset": 256, 00:20:22.599 "data_size": 7936 00:20:22.599 } 00:20:22.600 ] 00:20:22.600 }' 00:20:22.600 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.600 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.167 03:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.167 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.167 "name": "raid_bdev1", 00:20:23.167 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:23.167 "strip_size_kb": 0, 00:20:23.167 "state": "online", 00:20:23.167 "raid_level": "raid1", 00:20:23.167 "superblock": true, 00:20:23.168 "num_base_bdevs": 2, 00:20:23.168 "num_base_bdevs_discovered": 2, 00:20:23.168 "num_base_bdevs_operational": 2, 00:20:23.168 "base_bdevs_list": [ 00:20:23.168 { 00:20:23.168 "name": "spare", 00:20:23.168 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:23.168 "is_configured": true, 00:20:23.168 "data_offset": 256, 00:20:23.168 "data_size": 7936 00:20:23.168 }, 00:20:23.168 { 00:20:23.168 "name": "BaseBdev2", 00:20:23.168 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:23.168 "is_configured": true, 00:20:23.168 "data_offset": 256, 00:20:23.168 "data_size": 7936 00:20:23.168 } 00:20:23.168 ] 00:20:23.168 }' 00:20:23.168 03:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.168 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.168 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.427 [2024-11-05 03:31:36.911209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.427 03:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.427 "name": "raid_bdev1", 00:20:23.427 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:23.427 "strip_size_kb": 0, 00:20:23.427 "state": "online", 00:20:23.427 
"raid_level": "raid1", 00:20:23.427 "superblock": true, 00:20:23.427 "num_base_bdevs": 2, 00:20:23.427 "num_base_bdevs_discovered": 1, 00:20:23.427 "num_base_bdevs_operational": 1, 00:20:23.427 "base_bdevs_list": [ 00:20:23.427 { 00:20:23.427 "name": null, 00:20:23.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.427 "is_configured": false, 00:20:23.427 "data_offset": 0, 00:20:23.427 "data_size": 7936 00:20:23.427 }, 00:20:23.427 { 00:20:23.427 "name": "BaseBdev2", 00:20:23.427 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:23.427 "is_configured": true, 00:20:23.427 "data_offset": 256, 00:20:23.427 "data_size": 7936 00:20:23.427 } 00:20:23.427 ] 00:20:23.427 }' 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.427 03:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.008 03:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.008 03:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.008 03:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.008 [2024-11-05 03:31:37.471494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.008 [2024-11-05 03:31:37.471794] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:24.008 [2024-11-05 03:31:37.471821] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:24.008 [2024-11-05 03:31:37.471920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.008 [2024-11-05 03:31:37.489029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:24.008 03:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.008 03:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:24.008 [2024-11-05 03:31:37.491713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:24.944 "name": "raid_bdev1", 00:20:24.944 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:24.944 "strip_size_kb": 0, 00:20:24.944 "state": "online", 00:20:24.944 "raid_level": "raid1", 00:20:24.944 "superblock": true, 00:20:24.944 "num_base_bdevs": 2, 00:20:24.944 "num_base_bdevs_discovered": 2, 00:20:24.944 "num_base_bdevs_operational": 2, 00:20:24.944 "process": { 00:20:24.944 "type": "rebuild", 00:20:24.944 "target": "spare", 00:20:24.944 "progress": { 00:20:24.944 "blocks": 2560, 00:20:24.944 "percent": 32 00:20:24.944 } 00:20:24.944 }, 00:20:24.944 "base_bdevs_list": [ 00:20:24.944 { 00:20:24.944 "name": "spare", 00:20:24.944 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:24.944 "is_configured": true, 00:20:24.944 "data_offset": 256, 00:20:24.944 "data_size": 7936 00:20:24.944 }, 00:20:24.944 { 00:20:24.944 "name": "BaseBdev2", 00:20:24.944 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:24.944 "is_configured": true, 00:20:24.944 "data_offset": 256, 00:20:24.944 "data_size": 7936 00:20:24.944 } 00:20:24.944 ] 00:20:24.944 }' 00:20:24.944 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.203 [2024-11-05 03:31:38.669147] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.203 [2024-11-05 03:31:38.700211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.203 [2024-11-05 03:31:38.700346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.203 [2024-11-05 03:31:38.700370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.203 [2024-11-05 03:31:38.700385] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.203 03:31:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.203 "name": "raid_bdev1", 00:20:25.203 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:25.203 "strip_size_kb": 0, 00:20:25.203 "state": "online", 00:20:25.203 "raid_level": "raid1", 00:20:25.203 "superblock": true, 00:20:25.203 "num_base_bdevs": 2, 00:20:25.203 "num_base_bdevs_discovered": 1, 00:20:25.203 "num_base_bdevs_operational": 1, 00:20:25.203 "base_bdevs_list": [ 00:20:25.203 { 00:20:25.203 "name": null, 00:20:25.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.203 "is_configured": false, 00:20:25.203 "data_offset": 0, 00:20:25.203 "data_size": 7936 00:20:25.203 }, 00:20:25.203 { 00:20:25.203 "name": "BaseBdev2", 00:20:25.203 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:25.203 "is_configured": true, 00:20:25.203 "data_offset": 256, 00:20:25.203 "data_size": 7936 00:20:25.203 } 00:20:25.203 ] 00:20:25.203 }' 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.203 03:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.771 03:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:25.771 03:31:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.771 03:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.771 [2024-11-05 03:31:39.248131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:25.771 [2024-11-05 03:31:39.248246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.771 [2024-11-05 03:31:39.248278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:25.771 [2024-11-05 03:31:39.248296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.771 [2024-11-05 03:31:39.248593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.771 [2024-11-05 03:31:39.248622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:25.771 [2024-11-05 03:31:39.248691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:25.771 [2024-11-05 03:31:39.248713] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:25.771 [2024-11-05 03:31:39.248727] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:25.771 [2024-11-05 03:31:39.248765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.771 [2024-11-05 03:31:39.263203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:25.771 spare 00:20:25.771 03:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.771 03:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:25.771 [2024-11-05 03:31:39.265671] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.706 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:26.706 "name": "raid_bdev1", 00:20:26.706 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:26.706 "strip_size_kb": 0, 00:20:26.706 "state": "online", 00:20:26.706 "raid_level": "raid1", 00:20:26.706 "superblock": true, 00:20:26.706 "num_base_bdevs": 2, 00:20:26.706 "num_base_bdevs_discovered": 2, 00:20:26.707 "num_base_bdevs_operational": 2, 00:20:26.707 "process": { 00:20:26.707 "type": "rebuild", 00:20:26.707 "target": "spare", 00:20:26.707 "progress": { 00:20:26.707 "blocks": 2560, 00:20:26.707 "percent": 32 00:20:26.707 } 00:20:26.707 }, 00:20:26.707 "base_bdevs_list": [ 00:20:26.707 { 00:20:26.707 "name": "spare", 00:20:26.707 "uuid": "a9ae40da-72ac-5852-8c4a-c8b13d4f2383", 00:20:26.707 "is_configured": true, 00:20:26.707 "data_offset": 256, 00:20:26.707 "data_size": 7936 00:20:26.707 }, 00:20:26.707 { 00:20:26.707 "name": "BaseBdev2", 00:20:26.707 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:26.707 "is_configured": true, 00:20:26.707 "data_offset": 256, 00:20:26.707 "data_size": 7936 00:20:26.707 } 00:20:26.707 ] 00:20:26.707 }' 00:20:26.707 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.965 [2024-11-05 
03:31:40.426616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.965 [2024-11-05 03:31:40.473401] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:26.965 [2024-11-05 03:31:40.473485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.965 [2024-11-05 03:31:40.473510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.965 [2024-11-05 03:31:40.473521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.965 03:31:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.965 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.966 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.966 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.966 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.966 "name": "raid_bdev1", 00:20:26.966 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:26.966 "strip_size_kb": 0, 00:20:26.966 "state": "online", 00:20:26.966 "raid_level": "raid1", 00:20:26.966 "superblock": true, 00:20:26.966 "num_base_bdevs": 2, 00:20:26.966 "num_base_bdevs_discovered": 1, 00:20:26.966 "num_base_bdevs_operational": 1, 00:20:26.966 "base_bdevs_list": [ 00:20:26.966 { 00:20:26.966 "name": null, 00:20:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.966 "is_configured": false, 00:20:26.966 "data_offset": 0, 00:20:26.966 "data_size": 7936 00:20:26.966 }, 00:20:26.966 { 00:20:26.966 "name": "BaseBdev2", 00:20:26.966 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:26.966 "is_configured": true, 00:20:26.966 "data_offset": 256, 00:20:26.966 "data_size": 7936 00:20:26.966 } 00:20:26.966 ] 00:20:26.966 }' 00:20:26.966 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.966 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.533 03:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.533 03:31:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.533 "name": "raid_bdev1", 00:20:27.533 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:27.533 "strip_size_kb": 0, 00:20:27.533 "state": "online", 00:20:27.533 "raid_level": "raid1", 00:20:27.533 "superblock": true, 00:20:27.533 "num_base_bdevs": 2, 00:20:27.533 "num_base_bdevs_discovered": 1, 00:20:27.533 "num_base_bdevs_operational": 1, 00:20:27.533 "base_bdevs_list": [ 00:20:27.533 { 00:20:27.533 "name": null, 00:20:27.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.533 "is_configured": false, 00:20:27.533 "data_offset": 0, 00:20:27.533 "data_size": 7936 00:20:27.533 }, 00:20:27.533 { 00:20:27.533 "name": "BaseBdev2", 00:20:27.533 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:27.533 "is_configured": true, 00:20:27.533 "data_offset": 256, 
00:20:27.533 "data_size": 7936 00:20:27.533 } 00:20:27.533 ] 00:20:27.533 }' 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.533 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.791 [2024-11-05 03:31:41.183065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:27.791 [2024-11-05 03:31:41.183146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.791 [2024-11-05 03:31:41.183178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:27.791 [2024-11-05 03:31:41.183194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.791 [2024-11-05 03:31:41.183440] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.791 [2024-11-05 03:31:41.183463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:27.791 [2024-11-05 03:31:41.183531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:27.791 [2024-11-05 03:31:41.183550] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:27.791 [2024-11-05 03:31:41.183564] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:27.791 [2024-11-05 03:31:41.183576] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:27.791 BaseBdev1 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.791 03:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.727 03:31:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.727 "name": "raid_bdev1", 00:20:28.727 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:28.727 "strip_size_kb": 0, 00:20:28.727 "state": "online", 00:20:28.727 "raid_level": "raid1", 00:20:28.727 "superblock": true, 00:20:28.727 "num_base_bdevs": 2, 00:20:28.727 "num_base_bdevs_discovered": 1, 00:20:28.727 "num_base_bdevs_operational": 1, 00:20:28.727 "base_bdevs_list": [ 00:20:28.727 { 00:20:28.727 "name": null, 00:20:28.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.727 "is_configured": false, 00:20:28.727 "data_offset": 0, 00:20:28.727 "data_size": 7936 00:20:28.727 }, 00:20:28.727 { 00:20:28.727 "name": "BaseBdev2", 00:20:28.727 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:28.727 "is_configured": true, 00:20:28.727 "data_offset": 256, 00:20:28.727 "data_size": 7936 00:20:28.727 } 00:20:28.727 ] 00:20:28.727 }' 00:20:28.727 03:31:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.727 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.293 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.293 "name": "raid_bdev1", 00:20:29.293 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:29.293 "strip_size_kb": 0, 00:20:29.293 "state": "online", 00:20:29.293 "raid_level": "raid1", 00:20:29.293 "superblock": true, 00:20:29.293 "num_base_bdevs": 2, 00:20:29.293 "num_base_bdevs_discovered": 1, 00:20:29.293 "num_base_bdevs_operational": 1, 00:20:29.293 "base_bdevs_list": [ 00:20:29.293 { 00:20:29.293 "name": 
null, 00:20:29.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.293 "is_configured": false, 00:20:29.293 "data_offset": 0, 00:20:29.293 "data_size": 7936 00:20:29.293 }, 00:20:29.293 { 00:20:29.293 "name": "BaseBdev2", 00:20:29.293 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:29.293 "is_configured": true, 00:20:29.293 "data_offset": 256, 00:20:29.293 "data_size": 7936 00:20:29.293 } 00:20:29.293 ] 00:20:29.293 }' 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.294 [2024-11-05 03:31:42.867641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.294 [2024-11-05 03:31:42.867890] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:29.294 [2024-11-05 03:31:42.867915] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:29.294 request: 00:20:29.294 { 00:20:29.294 "base_bdev": "BaseBdev1", 00:20:29.294 "raid_bdev": "raid_bdev1", 00:20:29.294 "method": "bdev_raid_add_base_bdev", 00:20:29.294 "req_id": 1 00:20:29.294 } 00:20:29.294 Got JSON-RPC error response 00:20:29.294 response: 00:20:29.294 { 00:20:29.294 "code": -22, 00:20:29.294 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:29.294 } 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:29.294 03:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.668 "name": "raid_bdev1", 00:20:30.668 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:30.668 "strip_size_kb": 0, 
00:20:30.668 "state": "online", 00:20:30.668 "raid_level": "raid1", 00:20:30.668 "superblock": true, 00:20:30.668 "num_base_bdevs": 2, 00:20:30.668 "num_base_bdevs_discovered": 1, 00:20:30.668 "num_base_bdevs_operational": 1, 00:20:30.668 "base_bdevs_list": [ 00:20:30.668 { 00:20:30.668 "name": null, 00:20:30.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.668 "is_configured": false, 00:20:30.668 "data_offset": 0, 00:20:30.668 "data_size": 7936 00:20:30.668 }, 00:20:30.668 { 00:20:30.668 "name": "BaseBdev2", 00:20:30.668 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:30.668 "is_configured": true, 00:20:30.668 "data_offset": 256, 00:20:30.668 "data_size": 7936 00:20:30.668 } 00:20:30.668 ] 00:20:30.668 }' 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.668 03:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.928 
03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.928 "name": "raid_bdev1", 00:20:30.928 "uuid": "e4ada724-2960-4866-816b-67690cbd6779", 00:20:30.928 "strip_size_kb": 0, 00:20:30.928 "state": "online", 00:20:30.928 "raid_level": "raid1", 00:20:30.928 "superblock": true, 00:20:30.928 "num_base_bdevs": 2, 00:20:30.928 "num_base_bdevs_discovered": 1, 00:20:30.928 "num_base_bdevs_operational": 1, 00:20:30.928 "base_bdevs_list": [ 00:20:30.928 { 00:20:30.928 "name": null, 00:20:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.928 "is_configured": false, 00:20:30.928 "data_offset": 0, 00:20:30.928 "data_size": 7936 00:20:30.928 }, 00:20:30.928 { 00:20:30.928 "name": "BaseBdev2", 00:20:30.928 "uuid": "679d1f32-7c2e-5659-adbd-6edd548b60e9", 00:20:30.928 "is_configured": true, 00:20:30.928 "data_offset": 256, 00:20:30.928 "data_size": 7936 00:20:30.928 } 00:20:30.928 ] 00:20:30.928 }' 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89268 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89268 ']' 00:20:30.928 03:31:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89268 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89268 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:30.928 killing process with pid 89268 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89268' 00:20:30.928 Received shutdown signal, test time was about 60.000000 seconds 00:20:30.928 00:20:30.928 Latency(us) 00:20:30.928 [2024-11-05T03:31:44.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.928 [2024-11-05T03:31:44.567Z] =================================================================================================================== 00:20:30.928 [2024-11-05T03:31:44.567Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89268 00:20:30.928 [2024-11-05 03:31:44.554761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.928 03:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89268 00:20:30.928 [2024-11-05 03:31:44.554922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.928 [2024-11-05 03:31:44.554984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:30.928 [2024-11-05 03:31:44.555002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:31.187 [2024-11-05 03:31:44.797335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.122 03:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:32.122 00:20:32.122 real 0m18.419s 00:20:32.122 user 0m25.257s 00:20:32.122 sys 0m1.404s 00:20:32.122 03:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:32.122 ************************************ 00:20:32.122 END TEST raid_rebuild_test_sb_md_interleaved 00:20:32.122 ************************************ 00:20:32.122 03:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.122 03:31:45 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:32.122 03:31:45 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:32.122 03:31:45 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89268 ']' 00:20:32.122 03:31:45 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89268 00:20:32.382 03:31:45 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:32.382 00:20:32.382 real 12m57.027s 00:20:32.382 user 18m24.559s 00:20:32.382 sys 1m45.245s 00:20:32.382 03:31:45 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:32.382 03:31:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.382 ************************************ 00:20:32.382 END TEST bdev_raid 00:20:32.382 ************************************ 00:20:32.382 03:31:45 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:32.382 03:31:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:32.382 03:31:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:32.382 03:31:45 -- common/autotest_common.sh@10 -- # set +x 00:20:32.382 
************************************ 00:20:32.382 START TEST spdkcli_raid 00:20:32.382 ************************************ 00:20:32.382 03:31:45 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:32.382 * Looking for test storage... 00:20:32.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:32.382 03:31:45 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:32.382 03:31:45 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:32.382 03:31:45 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:32.382 03:31:45 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.382 03:31:45 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.382 03:31:46 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:32.382 03:31:46 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.382 03:31:46 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.382 --rc genhtml_branch_coverage=1 00:20:32.382 --rc genhtml_function_coverage=1 00:20:32.382 --rc genhtml_legend=1 00:20:32.382 --rc geninfo_all_blocks=1 00:20:32.382 --rc geninfo_unexecuted_blocks=1 00:20:32.382 00:20:32.382 ' 00:20:32.382 03:31:46 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.382 --rc genhtml_branch_coverage=1 00:20:32.382 --rc genhtml_function_coverage=1 00:20:32.382 --rc genhtml_legend=1 00:20:32.382 --rc geninfo_all_blocks=1 00:20:32.382 --rc geninfo_unexecuted_blocks=1 00:20:32.382 00:20:32.382 ' 00:20:32.382 
03:31:46 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.382 --rc genhtml_branch_coverage=1 00:20:32.382 --rc genhtml_function_coverage=1 00:20:32.382 --rc genhtml_legend=1 00:20:32.382 --rc geninfo_all_blocks=1 00:20:32.382 --rc geninfo_unexecuted_blocks=1 00:20:32.382 00:20:32.382 ' 00:20:32.382 03:31:46 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.382 --rc genhtml_branch_coverage=1 00:20:32.382 --rc genhtml_function_coverage=1 00:20:32.382 --rc genhtml_legend=1 00:20:32.382 --rc geninfo_all_blocks=1 00:20:32.382 --rc geninfo_unexecuted_blocks=1 00:20:32.382 00:20:32.382 ' 00:20:32.382 03:31:46 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:32.382 03:31:46 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:32.382 03:31:46 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:32.382 03:31:46 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:32.382 03:31:46 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:32.382 03:31:46 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:32.382 03:31:46 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89955 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89955 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 89955 ']' 00:20:32.641 03:31:46 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:32.641 03:31:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.641 [2024-11-05 03:31:46.153866] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:20:32.641 [2024-11-05 03:31:46.154085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89955 ] 00:20:32.900 [2024-11-05 03:31:46.336679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:32.900 [2024-11-05 03:31:46.453457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.900 [2024-11-05 03:31:46.453475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.836 03:31:47 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:33.836 03:31:47 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:20:33.836 03:31:47 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:33.836 03:31:47 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.836 03:31:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 03:31:47 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:33.836 03:31:47 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.836 03:31:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 03:31:47 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:33.836 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:33.836 ' 00:20:35.265 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:35.265 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:35.524 03:31:48 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:35.524 03:31:48 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.524 03:31:48 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.524 03:31:49 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:35.524 03:31:49 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.524 03:31:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.524 03:31:49 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:35.524 ' 00:20:36.461 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:36.720 03:31:50 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:36.720 03:31:50 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.720 03:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.720 03:31:50 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:36.720 03:31:50 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.720 03:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.720 03:31:50 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:36.720 03:31:50 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:37.288 03:31:50 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:37.288 03:31:50 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:37.288 03:31:50 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:37.288 03:31:50 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.288 03:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.288 03:31:50 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:37.288 03:31:50 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.288 03:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.288 03:31:50 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:37.288 ' 00:20:38.664 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:38.664 03:31:52 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:38.664 03:31:52 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.664 03:31:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.664 03:31:52 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:38.664 03:31:52 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.664 03:31:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.664 03:31:52 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:38.664 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:38.664 ' 00:20:40.040 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:40.040 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:40.040 03:31:53 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:40.040 03:31:53 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.040 03:31:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.040 03:31:53 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89955 00:20:40.040 03:31:53 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89955 ']' 00:20:40.040 03:31:53 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89955 00:20:40.040 03:31:53 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:20:40.040 03:31:53 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.040 03:31:53 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89955 00:20:40.299 03:31:53 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:40.299 03:31:53 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:40.299 killing process with pid 89955 00:20:40.299 03:31:53 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89955' 00:20:40.299 03:31:53 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 89955 00:20:40.299 03:31:53 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 89955 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89955 ']' 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89955 00:20:42.202 03:31:55 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89955 ']' 00:20:42.202 03:31:55 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89955 00:20:42.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (89955) - No such process 00:20:42.202 Process with pid 89955 is not found 00:20:42.202 03:31:55 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 89955 is not found' 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:42.202 03:31:55 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:42.202 00:20:42.202 real 0m9.868s 00:20:42.202 user 0m20.672s 00:20:42.202 sys 
0m1.048s 00:20:42.202 03:31:55 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:42.202 03:31:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.202 ************************************ 00:20:42.202 END TEST spdkcli_raid 00:20:42.202 ************************************ 00:20:42.202 03:31:55 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:42.202 03:31:55 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:42.202 03:31:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:42.202 03:31:55 -- common/autotest_common.sh@10 -- # set +x 00:20:42.202 ************************************ 00:20:42.202 START TEST blockdev_raid5f 00:20:42.202 ************************************ 00:20:42.202 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:42.202 * Looking for test storage... 00:20:42.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:42.202 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:42.202 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:20:42.202 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.461 03:31:55 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:42.461 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.461 --rc genhtml_branch_coverage=1 00:20:42.461 --rc genhtml_function_coverage=1 00:20:42.461 --rc genhtml_legend=1 00:20:42.461 --rc geninfo_all_blocks=1 00:20:42.461 --rc geninfo_unexecuted_blocks=1 00:20:42.461 00:20:42.461 ' 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:42.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.461 --rc genhtml_branch_coverage=1 00:20:42.461 --rc genhtml_function_coverage=1 00:20:42.461 --rc genhtml_legend=1 00:20:42.461 --rc geninfo_all_blocks=1 00:20:42.461 --rc geninfo_unexecuted_blocks=1 00:20:42.461 00:20:42.461 ' 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:42.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.461 --rc genhtml_branch_coverage=1 00:20:42.461 --rc genhtml_function_coverage=1 00:20:42.461 --rc genhtml_legend=1 00:20:42.461 --rc geninfo_all_blocks=1 00:20:42.461 --rc geninfo_unexecuted_blocks=1 00:20:42.461 00:20:42.461 ' 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:42.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.461 --rc genhtml_branch_coverage=1 00:20:42.461 --rc genhtml_function_coverage=1 00:20:42.461 --rc genhtml_legend=1 00:20:42.461 --rc geninfo_all_blocks=1 00:20:42.461 --rc geninfo_unexecuted_blocks=1 00:20:42.461 00:20:42.461 ' 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90225 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90225 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90225 ']' 00:20:42.461 03:31:55 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:42.461 03:31:55 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.462 03:31:55 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.462 03:31:55 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.462 03:31:55 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.462 03:31:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.462 [2024-11-05 03:31:56.065364] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:42.462 [2024-11-05 03:31:56.065537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90225 ] 00:20:42.720 [2024-11-05 03:31:56.251532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.979 [2024-11-05 03:31:56.361283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.546 03:31:57 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.546 03:31:57 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:20:43.546 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:43.546 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:43.546 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:43.546 03:31:57 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.546 03:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.546 Malloc0 00:20:43.805 Malloc1 00:20:43.805 Malloc2 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "98385b27-ea47-435c-8f19-872be0728a5d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "98385b27-ea47-435c-8f19-872be0728a5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "98385b27-ea47-435c-8f19-872be0728a5d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "bd9ce45b-b1e9-455e-a6bc-3ffcfdb9755b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "92dfb696-63a8-4265-9153-fc72d3a12dae",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b86751f7-7722-43a1-9918-bc79bfdaffc3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:43.805 03:31:57 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90225 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90225 ']' 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90225 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:43.805 03:31:57 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90225 00:20:44.064 03:31:57 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:44.064 03:31:57 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:44.064 killing process with pid 90225 00:20:44.064 03:31:57 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90225' 00:20:44.064 03:31:57 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90225 00:20:44.064 03:31:57 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90225 00:20:45.968 03:31:59 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:45.968 03:31:59 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:45.968 03:31:59 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:45.968 03:31:59 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:45.968 03:31:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.968 ************************************ 00:20:45.968 START TEST bdev_hello_world 00:20:45.968 ************************************ 00:20:45.968 03:31:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:46.226 [2024-11-05 03:31:59.697290] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:46.226 [2024-11-05 03:31:59.697490] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90287 ] 00:20:46.484 [2024-11-05 03:31:59.880066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.484 [2024-11-05 03:31:59.984254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.051 [2024-11-05 03:32:00.473370] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:47.051 [2024-11-05 03:32:00.473418] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:47.051 [2024-11-05 03:32:00.473438] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:47.051 [2024-11-05 03:32:00.474033] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:47.051 [2024-11-05 03:32:00.474233] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:47.051 [2024-11-05 03:32:00.474260] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:47.051 [2024-11-05 03:32:00.474354] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:47.051 00:20:47.051 [2024-11-05 03:32:00.474383] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:47.985 00:20:47.985 real 0m2.005s 00:20:47.985 user 0m1.585s 00:20:47.985 sys 0m0.297s 00:20:47.985 03:32:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:47.985 03:32:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:47.985 ************************************ 00:20:47.985 END TEST bdev_hello_world 00:20:47.985 ************************************ 00:20:48.244 03:32:01 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:48.244 03:32:01 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:48.244 03:32:01 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:48.244 03:32:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:48.244 ************************************ 00:20:48.244 START TEST bdev_bounds 00:20:48.244 ************************************ 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90328 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:48.244 Process bdevio pid: 90328 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90328' 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90328 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90328 ']' 00:20:48.244 03:32:01 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:48.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:48.244 03:32:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:48.244 [2024-11-05 03:32:01.753202] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:48.244 [2024-11-05 03:32:01.753401] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90328 ] 00:20:48.503 [2024-11-05 03:32:01.934921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.503 [2024-11-05 03:32:02.045558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.503 [2024-11-05 03:32:02.045709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.503 [2024-11-05 03:32:02.045726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.438 03:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:49.438 03:32:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:20:49.438 03:32:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:49.438 I/O targets: 00:20:49.438 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:49.438 00:20:49.438 
00:20:49.438 CUnit - A unit testing framework for C - Version 2.1-3 00:20:49.438 http://cunit.sourceforge.net/ 00:20:49.438 00:20:49.438 00:20:49.438 Suite: bdevio tests on: raid5f 00:20:49.439 Test: blockdev write read block ...passed 00:20:49.439 Test: blockdev write zeroes read block ...passed 00:20:49.439 Test: blockdev write zeroes read no split ...passed 00:20:49.439 Test: blockdev write zeroes read split ...passed 00:20:49.439 Test: blockdev write zeroes read split partial ...passed 00:20:49.439 Test: blockdev reset ...passed 00:20:49.439 Test: blockdev write read 8 blocks ...passed 00:20:49.439 Test: blockdev write read size > 128k ...passed 00:20:49.439 Test: blockdev write read invalid size ...passed 00:20:49.439 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:49.439 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:49.439 Test: blockdev write read max offset ...passed 00:20:49.439 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:49.439 Test: blockdev writev readv 8 blocks ...passed 00:20:49.439 Test: blockdev writev readv 30 x 1block ...passed 00:20:49.439 Test: blockdev writev readv block ...passed 00:20:49.439 Test: blockdev writev readv size > 128k ...passed 00:20:49.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:49.697 Test: blockdev comparev and writev ...passed 00:20:49.697 Test: blockdev nvme passthru rw ...passed 00:20:49.697 Test: blockdev nvme passthru vendor specific ...passed 00:20:49.697 Test: blockdev nvme admin passthru ...passed 00:20:49.697 Test: blockdev copy ...passed 00:20:49.697 00:20:49.697 Run Summary: Type Total Ran Passed Failed Inactive 00:20:49.697 suites 1 1 n/a 0 0 00:20:49.697 tests 23 23 23 0 0 00:20:49.697 asserts 130 130 130 0 n/a 00:20:49.697 00:20:49.697 Elapsed time = 0.493 seconds 00:20:49.697 0 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90328 00:20:49.697 
03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90328 ']' 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90328 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90328 00:20:49.697 killing process with pid 90328 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90328' 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90328 00:20:49.697 03:32:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90328 00:20:51.071 03:32:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:51.071 00:20:51.071 real 0m2.641s 00:20:51.071 user 0m6.684s 00:20:51.071 sys 0m0.414s 00:20:51.071 03:32:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:51.071 03:32:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:51.071 ************************************ 00:20:51.071 END TEST bdev_bounds 00:20:51.071 ************************************ 00:20:51.071 03:32:04 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:51.071 03:32:04 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:51.071 03:32:04 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:51.071 
03:32:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:51.071 ************************************ 00:20:51.071 START TEST bdev_nbd 00:20:51.071 ************************************ 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90383 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90383 /var/tmp/spdk-nbd.sock 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90383 ']' 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:51.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:51.071 03:32:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:51.071 [2024-11-05 03:32:04.435714] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:20:51.071 [2024-11-05 03:32:04.435878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.071 [2024-11-05 03:32:04.610761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.330 [2024-11-05 03:32:04.725563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:51.898 03:32:05 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.157 1+0 records in 00:20:52.157 1+0 records out 00:20:52.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033122 s, 12.4 MB/s 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:52.157 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:52.416 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:52.416 { 00:20:52.416 "nbd_device": "/dev/nbd0", 00:20:52.416 "bdev_name": "raid5f" 00:20:52.416 } 00:20:52.416 ]' 00:20:52.416 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:52.416 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:52.416 { 00:20:52.416 "nbd_device": "/dev/nbd0", 00:20:52.416 "bdev_name": "raid5f" 00:20:52.416 } 00:20:52.416 ]' 00:20:52.416 03:32:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.416 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.675 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:52.933 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:52.933 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:52.933 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.192 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:53.451 /dev/nbd0 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:53.451 03:32:06 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.451 1+0 records in 00:20:53.451 1+0 records out 00:20:53.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030596 s, 13.4 MB/s 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.451 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.452 03:32:06 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:53.452 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.452 03:32:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:53.710 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:53.710 { 00:20:53.710 "nbd_device": "/dev/nbd0", 00:20:53.710 "bdev_name": "raid5f" 00:20:53.710 } 00:20:53.710 ]' 00:20:53.710 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:53.710 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:53.711 { 00:20:53.711 "nbd_device": "/dev/nbd0", 00:20:53.711 "bdev_name": "raid5f" 00:20:53.711 } 00:20:53.711 ]' 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:53.711 256+0 records in 00:20:53.711 256+0 records out 00:20:53.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469039 s, 224 MB/s 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:53.711 256+0 records in 00:20:53.711 256+0 records out 00:20:53.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0361982 s, 29.0 MB/s 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.711 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:53.969 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.228 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:54.228 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:54.228 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:54.228 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:54.486 03:32:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:54.745 malloc_lvol_verify 00:20:54.745 03:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:55.003 e538daea-1772-4cfa-8230-ab13286ea517 00:20:55.003 03:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:55.262 88ed14d1-1114-47fc-b449-48019b1e3e2f 00:20:55.262 03:32:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:55.521 /dev/nbd0 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:55.521 mke2fs 1.47.0 (5-Feb-2023) 00:20:55.521 Discarding device blocks: 0/4096 done 00:20:55.521 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:55.521 00:20:55.521 Allocating group tables: 0/1 done 00:20:55.521 Writing inode tables: 0/1 done 00:20:55.521 Creating journal (1024 blocks): done 00:20:55.521 Writing superblocks and filesystem accounting information: 0/1 done 00:20:55.521 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.521 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90383 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90383 ']' 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90383 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90383 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90383' 00:20:55.781 killing process with pid 90383 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90383 00:20:55.781 03:32:09 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90383 00:20:57.199 03:32:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:57.199 00:20:57.199 real 0m6.225s 00:20:57.199 user 0m9.010s 00:20:57.199 sys 0m1.346s 00:20:57.199 03:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:57.199 03:32:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:57.199 ************************************ 00:20:57.199 END TEST bdev_nbd 00:20:57.199 ************************************ 00:20:57.199 03:32:10 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:57.199 03:32:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:57.199 03:32:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:57.199 03:32:10 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:57.199 03:32:10 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:57.199 03:32:10 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:57.199 03:32:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:57.199 ************************************ 00:20:57.199 START TEST bdev_fio 00:20:57.199 ************************************ 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:57.199 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:57.199 ************************************ 00:20:57.199 START TEST bdev_fio_rw_verify 00:20:57.199 ************************************ 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:57.199 03:32:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:57.458 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:57.458 fio-3.35 00:20:57.458 Starting 1 thread 00:21:09.662 00:21:09.662 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90592: Tue Nov 5 03:32:21 2024 00:21:09.662 read: IOPS=9331, BW=36.5MiB/s (38.2MB/s)(365MiB/10001msec) 00:21:09.662 slat (usec): min=20, max=318, avg=25.89, stdev= 7.64 00:21:09.662 clat (usec): min=12, max=771, avg=170.17, stdev=65.93 00:21:09.662 lat (usec): min=36, max=800, avg=196.06, stdev=67.38 00:21:09.662 clat percentiles (usec): 00:21:09.662 | 50.000th=[ 167], 99.000th=[ 322], 99.900th=[ 465], 99.990th=[ 644], 00:21:09.662 | 99.999th=[ 775] 00:21:09.662 write: IOPS=9774, BW=38.2MiB/s (40.0MB/s)(377MiB/9885msec); 0 zone resets 00:21:09.662 slat (usec): min=10, max=1565, avg=21.84, stdev= 8.91 00:21:09.662 clat (usec): min=67, max=2105, avg=392.05, stdev=66.63 00:21:09.662 lat (usec): min=85, max=2132, avg=413.89, stdev=68.98 00:21:09.662 clat percentiles (usec): 00:21:09.662 | 50.000th=[ 388], 99.000th=[ 578], 99.900th=[ 750], 99.990th=[ 1074], 00:21:09.662 | 99.999th=[ 2114] 00:21:09.662 bw ( KiB/s): min=33776, max=40912, per=98.35%, avg=38455.58, stdev=1718.87, samples=19 00:21:09.662 iops : min= 8444, max=10228, avg=9613.89, stdev=429.72, samples=19 00:21:09.662 lat (usec) : 20=0.01%, 50=0.01%, 100=9.05%, 
250=34.47%, 500=53.87% 00:21:09.662 lat (usec) : 750=2.56%, 1000=0.04% 00:21:09.662 lat (msec) : 2=0.01%, 4=0.01% 00:21:09.662 cpu : usr=97.95%, sys=0.89%, ctx=24, majf=0, minf=7973 00:21:09.662 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.662 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.662 issued rwts: total=93326,96622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.662 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.662 00:21:09.662 Run status group 0 (all jobs): 00:21:09.662 READ: bw=36.5MiB/s (38.2MB/s), 36.5MiB/s-36.5MiB/s (38.2MB/s-38.2MB/s), io=365MiB (382MB), run=10001-10001msec 00:21:09.662 WRITE: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=377MiB (396MB), run=9885-9885msec 00:21:09.662 ----------------------------------------------------- 00:21:09.662 Suppressions used: 00:21:09.662 count bytes template 00:21:09.662 1 7 /usr/src/fio/parse.c 00:21:09.662 391 37536 /usr/src/fio/iolog.c 00:21:09.662 1 8 libtcmalloc_minimal.so 00:21:09.662 1 904 libcrypto.so 00:21:09.662 ----------------------------------------------------- 00:21:09.662 00:21:09.662 00:21:09.662 real 0m12.494s 00:21:09.662 user 0m12.599s 00:21:09.662 sys 0m0.754s 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:09.662 ************************************ 00:21:09.662 END TEST bdev_fio_rw_verify 00:21:09.662 ************************************ 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:21:09.662 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "98385b27-ea47-435c-8f19-872be0728a5d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "98385b27-ea47-435c-8f19-872be0728a5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "98385b27-ea47-435c-8f19-872be0728a5d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "bd9ce45b-b1e9-455e-a6bc-3ffcfdb9755b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "92dfb696-63a8-4265-9153-fc72d3a12dae",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b86751f7-7722-43a1-9918-bc79bfdaffc3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:09.663 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:09.922 /home/vagrant/spdk_repo/spdk 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:09.922 00:21:09.922 real 0m12.713s 00:21:09.922 user 0m12.702s 00:21:09.922 sys 0m0.843s 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:09.922 ************************************ 00:21:09.922 END TEST bdev_fio 00:21:09.922 ************************************ 00:21:09.922 03:32:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:09.922 03:32:23 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:09.922 03:32:23 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:09.922 03:32:23 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:09.922 03:32:23 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:09.922 03:32:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:09.922 ************************************ 00:21:09.922 START TEST bdev_verify 00:21:09.922 ************************************ 00:21:09.922 03:32:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:09.922 [2024-11-05 03:32:23.503446] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:21:09.922 [2024-11-05 03:32:23.503623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90745 ] 00:21:10.181 [2024-11-05 03:32:23.686191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:10.181 [2024-11-05 03:32:23.793944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.181 [2024-11-05 03:32:23.793958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.749 Running I/O for 5 seconds... 00:21:12.694 13443.00 IOPS, 52.51 MiB/s [2024-11-05T03:32:27.709Z] 14292.50 IOPS, 55.83 MiB/s [2024-11-05T03:32:28.646Z] 14645.67 IOPS, 57.21 MiB/s [2024-11-05T03:32:29.583Z] 14769.75 IOPS, 57.69 MiB/s [2024-11-05T03:32:29.583Z] 14757.00 IOPS, 57.64 MiB/s 00:21:15.944 Latency(us) 00:21:15.944 [2024-11-05T03:32:29.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.944 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:15.944 Verification LBA range: start 0x0 length 0x2000 00:21:15.944 raid5f : 5.01 7371.33 28.79 0.00 0.00 26139.10 297.89 22758.87 00:21:15.944 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:15.944 Verification LBA range: start 0x2000 length 0x2000 00:21:15.944 raid5f : 5.02 7381.61 28.83 0.00 0.00 26074.37 109.85 23116.33 00:21:15.944 [2024-11-05T03:32:29.583Z] =================================================================================================================== 00:21:15.944 [2024-11-05T03:32:29.583Z] Total : 14752.94 57.63 0.00 0.00 26106.69 109.85 23116.33 00:21:17.321 00:21:17.321 real 0m7.168s 00:21:17.321 user 0m13.182s 00:21:17.321 sys 0m0.309s 00:21:17.321 03:32:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:17.321 03:32:30 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:17.321 ************************************ 00:21:17.321 END TEST bdev_verify 00:21:17.321 ************************************ 00:21:17.321 03:32:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:17.321 03:32:30 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:17.321 03:32:30 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:17.321 03:32:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:17.321 ************************************ 00:21:17.321 START TEST bdev_verify_big_io 00:21:17.321 ************************************ 00:21:17.321 03:32:30 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:17.321 [2024-11-05 03:32:30.737121] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:21:17.321 [2024-11-05 03:32:30.737708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90838 ] 00:21:17.321 [2024-11-05 03:32:30.920566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:17.580 [2024-11-05 03:32:31.046123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.580 [2024-11-05 03:32:31.046127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.148 Running I/O for 5 seconds... 
00:21:20.023 630.00 IOPS, 39.38 MiB/s [2024-11-05T03:32:35.040Z] 761.00 IOPS, 47.56 MiB/s [2024-11-05T03:32:35.974Z] 761.33 IOPS, 47.58 MiB/s [2024-11-05T03:32:36.935Z] 824.50 IOPS, 51.53 MiB/s [2024-11-05T03:32:36.935Z] 850.20 IOPS, 53.14 MiB/s 00:21:23.296 Latency(us) 00:21:23.296 [2024-11-05T03:32:36.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.296 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:23.296 Verification LBA range: start 0x0 length 0x200 00:21:23.296 raid5f : 5.23 436.43 27.28 0.00 0.00 7244195.04 202.01 341263.83 00:21:23.296 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:23.296 Verification LBA range: start 0x200 length 0x200 00:21:23.296 raid5f : 5.09 424.49 26.53 0.00 0.00 7465245.26 182.46 345076.83 00:21:23.296 [2024-11-05T03:32:36.935Z] =================================================================================================================== 00:21:23.296 [2024-11-05T03:32:36.935Z] Total : 860.92 53.81 0.00 0.00 7351610.62 182.46 345076.83 00:21:24.672 00:21:24.672 real 0m7.448s 00:21:24.672 user 0m13.694s 00:21:24.672 sys 0m0.324s 00:21:24.672 03:32:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.672 ************************************ 00:21:24.672 END TEST bdev_verify_big_io 00:21:24.672 03:32:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.672 ************************************ 00:21:24.672 03:32:38 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:24.672 03:32:38 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:24.672 03:32:38 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:24.672 03:32:38 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:24.672 ************************************ 00:21:24.672 START TEST bdev_write_zeroes 00:21:24.672 ************************************ 00:21:24.672 03:32:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:24.672 [2024-11-05 03:32:38.235595] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:21:24.672 [2024-11-05 03:32:38.236149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90936 ] 00:21:24.931 [2024-11-05 03:32:38.405587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.931 [2024-11-05 03:32:38.516781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.497 Running I/O for 1 seconds... 
00:21:26.429 20967.00 IOPS, 81.90 MiB/s 00:21:26.429 Latency(us) 00:21:26.429 [2024-11-05T03:32:40.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.429 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.429 raid5f : 1.01 20921.58 81.72 0.00 0.00 6092.47 1936.29 9294.20 00:21:26.429 [2024-11-05T03:32:40.068Z] =================================================================================================================== 00:21:26.429 [2024-11-05T03:32:40.068Z] Total : 20921.58 81.72 0.00 0.00 6092.47 1936.29 9294.20 00:21:27.805 00:21:27.805 real 0m3.080s 00:21:27.805 user 0m2.649s 00:21:27.805 sys 0m0.297s 00:21:27.805 03:32:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.805 03:32:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:27.805 ************************************ 00:21:27.805 END TEST bdev_write_zeroes 00:21:27.805 ************************************ 00:21:27.805 03:32:41 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.805 03:32:41 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:27.805 03:32:41 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.805 03:32:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:27.805 ************************************ 00:21:27.805 START TEST bdev_json_nonenclosed 00:21:27.805 ************************************ 00:21:27.805 03:32:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.805 [2024-11-05 
03:32:41.369349] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:21:27.805 [2024-11-05 03:32:41.369592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90985 ] 00:21:28.063 [2024-11-05 03:32:41.554641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.063 [2024-11-05 03:32:41.678870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.063 [2024-11-05 03:32:41.679130] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:28.063 [2024-11-05 03:32:41.679178] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:28.063 [2024-11-05 03:32:41.679195] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:28.321 00:21:28.321 real 0m0.669s 00:21:28.321 user 0m0.423s 00:21:28.321 sys 0m0.140s 00:21:28.321 03:32:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:28.321 ************************************ 00:21:28.321 END TEST bdev_json_nonenclosed 00:21:28.321 ************************************ 00:21:28.321 03:32:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:28.578 03:32:41 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.578 03:32:41 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:28.578 03:32:41 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:28.578 03:32:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.578 
************************************ 00:21:28.578 START TEST bdev_json_nonarray 00:21:28.578 ************************************ 00:21:28.578 03:32:41 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.578 [2024-11-05 03:32:42.085857] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:21:28.578 [2024-11-05 03:32:42.086065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91016 ] 00:21:28.835 [2024-11-05 03:32:42.267560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.835 [2024-11-05 03:32:42.386847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.835 [2024-11-05 03:32:42.386976] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:28.835 [2024-11-05 03:32:42.387004] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:28.835 [2024-11-05 03:32:42.387027] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:29.093 00:21:29.093 real 0m0.637s 00:21:29.093 user 0m0.395s 00:21:29.093 sys 0m0.136s 00:21:29.093 ************************************ 00:21:29.093 END TEST bdev_json_nonarray 00:21:29.093 ************************************ 00:21:29.093 03:32:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:29.093 03:32:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:29.093 03:32:42 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:29.093 00:21:29.093 real 0m46.928s 00:21:29.093 user 1m4.335s 00:21:29.093 sys 0m5.050s 00:21:29.093 ************************************ 00:21:29.093 END TEST blockdev_raid5f 00:21:29.093 ************************************ 00:21:29.093 03:32:42 blockdev_raid5f -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:21:29.093 03:32:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:29.093 03:32:42 -- spdk/autotest.sh@194 -- # uname -s 00:21:29.093 03:32:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:29.093 03:32:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:29.093 03:32:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:29.093 03:32:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:29.093 03:32:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:21:29.093 03:32:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:21:29.093 03:32:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.093 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:29.351 03:32:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:29.351 03:32:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:29.351 03:32:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:29.351 03:32:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:29.351 03:32:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:29.351 03:32:42 -- spdk/autotest.sh@381 -- # trap - 
SIGINT SIGTERM EXIT 00:21:29.351 03:32:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:29.351 03:32:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.351 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:29.351 03:32:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:29.351 03:32:42 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:29.351 03:32:42 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:29.351 03:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:31.250 INFO: APP EXITING 00:21:31.250 INFO: killing all VMs 00:21:31.250 INFO: killing vhost app 00:21:31.250 INFO: EXIT DONE 00:21:31.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.250 Waiting for block devices as requested 00:21:31.250 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.508 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:32.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.076 Cleaning 00:21:32.076 Removing: /var/run/dpdk/spdk0/config 00:21:32.076 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:32.076 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:32.076 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:32.076 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:32.076 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:32.076 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:32.076 Removing: /dev/shm/spdk_tgt_trace.pid56716 00:21:32.076 Removing: /var/run/dpdk/spdk0 00:21:32.076 Removing: /var/run/dpdk/spdk_pid56492 00:21:32.076 Removing: /var/run/dpdk/spdk_pid56716 00:21:32.076 Removing: /var/run/dpdk/spdk_pid56945 00:21:32.076 Removing: /var/run/dpdk/spdk_pid57049 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57100 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57228 00:21:32.371 Removing: 
/var/run/dpdk/spdk_pid57246 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57450 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57555 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57657 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57780 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57888 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57922 00:21:32.371 Removing: /var/run/dpdk/spdk_pid57964 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58029 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58135 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58604 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58674 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58748 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58764 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58907 00:21:32.371 Removing: /var/run/dpdk/spdk_pid58930 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59076 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59097 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59163 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59181 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59245 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59263 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59458 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59495 00:21:32.371 Removing: /var/run/dpdk/spdk_pid59584 00:21:32.371 Removing: /var/run/dpdk/spdk_pid60932 00:21:32.371 Removing: /var/run/dpdk/spdk_pid61144 00:21:32.371 Removing: /var/run/dpdk/spdk_pid61295 00:21:32.371 Removing: /var/run/dpdk/spdk_pid61944 00:21:32.371 Removing: /var/run/dpdk/spdk_pid62161 00:21:32.371 Removing: /var/run/dpdk/spdk_pid62301 00:21:32.371 Removing: /var/run/dpdk/spdk_pid62955 00:21:32.371 Removing: /var/run/dpdk/spdk_pid63291 00:21:32.371 Removing: /var/run/dpdk/spdk_pid63431 00:21:32.371 Removing: /var/run/dpdk/spdk_pid64849 00:21:32.371 Removing: /var/run/dpdk/spdk_pid65108 00:21:32.371 Removing: /var/run/dpdk/spdk_pid65259 00:21:32.372 Removing: /var/run/dpdk/spdk_pid66666 00:21:32.372 Removing: /var/run/dpdk/spdk_pid66925 00:21:32.372 Removing: 
/var/run/dpdk/spdk_pid67069 00:21:32.372 Removing: /var/run/dpdk/spdk_pid68481 00:21:32.372 Removing: /var/run/dpdk/spdk_pid68932 00:21:32.372 Removing: /var/run/dpdk/spdk_pid69078 00:21:32.372 Removing: /var/run/dpdk/spdk_pid70585 00:21:32.372 Removing: /var/run/dpdk/spdk_pid70851 00:21:32.372 Removing: /var/run/dpdk/spdk_pid70997 00:21:32.372 Removing: /var/run/dpdk/spdk_pid72509 00:21:32.372 Removing: /var/run/dpdk/spdk_pid72775 00:21:32.372 Removing: /var/run/dpdk/spdk_pid72921 00:21:32.372 Removing: /var/run/dpdk/spdk_pid74433 00:21:32.372 Removing: /var/run/dpdk/spdk_pid74927 00:21:32.372 Removing: /var/run/dpdk/spdk_pid75073 00:21:32.372 Removing: /var/run/dpdk/spdk_pid75211 00:21:32.372 Removing: /var/run/dpdk/spdk_pid75662 00:21:32.372 Removing: /var/run/dpdk/spdk_pid76431 00:21:32.372 Removing: /var/run/dpdk/spdk_pid76832 00:21:32.372 Removing: /var/run/dpdk/spdk_pid77551 00:21:32.372 Removing: /var/run/dpdk/spdk_pid78019 00:21:32.372 Removing: /var/run/dpdk/spdk_pid78808 00:21:32.372 Removing: /var/run/dpdk/spdk_pid79230 00:21:32.372 Removing: /var/run/dpdk/spdk_pid81234 00:21:32.372 Removing: /var/run/dpdk/spdk_pid81686 00:21:32.372 Removing: /var/run/dpdk/spdk_pid82143 00:21:32.372 Removing: /var/run/dpdk/spdk_pid84283 00:21:32.372 Removing: /var/run/dpdk/spdk_pid84774 00:21:32.372 Removing: /var/run/dpdk/spdk_pid85283 00:21:32.372 Removing: /var/run/dpdk/spdk_pid86358 00:21:32.372 Removing: /var/run/dpdk/spdk_pid86692 00:21:32.372 Removing: /var/run/dpdk/spdk_pid87656 00:21:32.372 Removing: /var/run/dpdk/spdk_pid87980 00:21:32.372 Removing: /var/run/dpdk/spdk_pid88944 00:21:32.372 Removing: /var/run/dpdk/spdk_pid89268 00:21:32.372 Removing: /var/run/dpdk/spdk_pid89955 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90225 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90287 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90328 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90577 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90745 00:21:32.372 Removing: 
/var/run/dpdk/spdk_pid90838 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90936 00:21:32.372 Removing: /var/run/dpdk/spdk_pid90985 00:21:32.372 Removing: /var/run/dpdk/spdk_pid91016 00:21:32.372 Clean 00:21:32.630 03:32:46 -- common/autotest_common.sh@1451 -- # return 0 00:21:32.630 03:32:46 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:32.630 03:32:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.630 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.630 03:32:46 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:32.630 03:32:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.630 03:32:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.630 03:32:46 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:32.630 03:32:46 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:32.630 03:32:46 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:32.630 03:32:46 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:32.630 03:32:46 -- spdk/autotest.sh@394 -- # hostname 00:21:32.630 03:32:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:32.888 geninfo: WARNING: invalid characters removed from testname! 
00:21:59.443 03:33:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.443 03:33:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.737 03:33:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.639 03:33:18 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:07.172 03:33:20 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.705 03:33:23 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:12.240 03:33:25 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:12.240 03:33:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:12.240 03:33:25 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:12.240 03:33:25 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:12.240 03:33:25 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:12.240 03:33:25 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:12.499 + [[ -n 5206 ]] 00:22:12.499 + sudo kill 5206 00:22:12.508 [Pipeline] } 00:22:12.519 [Pipeline] // timeout 00:22:12.524 [Pipeline] } 00:22:12.536 [Pipeline] // stage 00:22:12.541 [Pipeline] } 00:22:12.553 [Pipeline] // catchError 00:22:12.561 [Pipeline] stage 00:22:12.563 [Pipeline] { (Stop VM) 00:22:12.577 [Pipeline] sh 00:22:12.857 + vagrant halt 00:22:16.145 ==> default: Halting domain... 00:22:22.723 [Pipeline] sh 00:22:23.053 + vagrant destroy -f 00:22:26.340 ==> default: Removing domain... 
00:22:26.351 [Pipeline] sh 00:22:26.631 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:26.640 [Pipeline] } 00:22:26.655 [Pipeline] // stage 00:22:26.660 [Pipeline] } 00:22:26.675 [Pipeline] // dir 00:22:26.680 [Pipeline] } 00:22:26.695 [Pipeline] // wrap 00:22:26.702 [Pipeline] } 00:22:26.714 [Pipeline] // catchError 00:22:26.724 [Pipeline] stage 00:22:26.726 [Pipeline] { (Epilogue) 00:22:26.740 [Pipeline] sh 00:22:27.022 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:32.311 [Pipeline] catchError 00:22:32.313 [Pipeline] { 00:22:32.326 [Pipeline] sh 00:22:32.606 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:32.607 Artifacts sizes are good 00:22:32.616 [Pipeline] } 00:22:32.629 [Pipeline] // catchError 00:22:32.640 [Pipeline] archiveArtifacts 00:22:32.647 Archiving artifacts 00:22:32.774 [Pipeline] cleanWs 00:22:32.787 [WS-CLEANUP] Deleting project workspace... 00:22:32.787 [WS-CLEANUP] Deferred wipeout is used... 00:22:32.809 [WS-CLEANUP] done 00:22:32.811 [Pipeline] } 00:22:32.826 [Pipeline] // stage 00:22:32.832 [Pipeline] } 00:22:32.846 [Pipeline] // node 00:22:32.852 [Pipeline] End of Pipeline 00:22:32.891 Finished: SUCCESS